Search (457 results, page 1 of 23)

  • type_ss:"el"
  • language_ss:"e"
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.26
    0.26128206 = product of:
      0.5225641 = sum of:
        0.1175746 = product of:
          0.3527238 = sum of:
            0.3527238 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.3527238 = score(doc=1826,freq=2.0), product of:
                0.37656134 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.044416238 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
        0.052265707 = weight(_text_:web in 1826) [ClassicSimilarity], result of:
          0.052265707 = score(doc=1826,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.36057037 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
        0.3527238 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.3527238 = score(doc=1826,freq=2.0), product of:
            0.37656134 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.044416238 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.5 = coord(3/6)
    
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
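    The indented trees shown with each result are Lucene ClassicSimilarity (tf-idf) "explain" traces. Their arithmetic can be reproduced directly; a minimal Python sketch, assuming Lucene's documented ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), score = queryWeight x fieldWeight):
      import math

      # ClassicSimilarity building blocks, as they appear in the explain trees:
      #   tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)),
      #   queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm.
      def idf(doc_freq, max_docs):
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def term_score(freq, doc_freq, max_docs, field_norm, query_norm):
          tf = math.sqrt(freq)
          query_weight = idf(doc_freq, max_docs) * query_norm
          field_weight = tf * idf(doc_freq, max_docs) * field_norm
          return query_weight * field_weight

      # weight(_text_:3a in 1826) from the first result:
      print(idf(24, 44218))                                     # ~8.478011
      print(term_score(2.0, 24, 44218, 0.078125, 0.044416238))  # ~0.3527238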
  2. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.13
    0.12541291 = product of:
      0.3762387 = sum of:
        0.094059676 = product of:
          0.28217903 = sum of:
            0.28217903 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.28217903 = score(doc=230,freq=2.0), product of:
                0.37656134 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.044416238 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
        0.28217903 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.28217903 = score(doc=230,freq=2.0), product of:
            0.37656134 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.044416238 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
      0.33333334 = coord(2/6)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  3. Austin, D.: How Google finds your needle in the Web's haystack : as we'll see, the trick is to ask the web itself to rank the importance of pages... (2006) 0.09
    0.09150508 = product of:
      0.13725762 = sum of:
        0.033718713 = weight(_text_:wide in 93) [ClassicSimilarity], result of:
          0.033718713 = score(doc=93,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.171337 = fieldWeight in 93, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.02734375 = fieldNorm(doc=93)
        0.05174041 = weight(_text_:web in 93) [ClassicSimilarity], result of:
          0.05174041 = score(doc=93,freq=16.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.35694647 = fieldWeight in 93, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=93)
        0.02293892 = weight(_text_:computer in 93) [ClassicSimilarity], result of:
          0.02293892 = score(doc=93,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.14131951 = fieldWeight in 93, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.02734375 = fieldNorm(doc=93)
        0.028859572 = product of:
          0.057719145 = sum of:
            0.057719145 = weight(_text_:programs in 93) [ClassicSimilarity], result of:
              0.057719145 = score(doc=93,freq=2.0), product of:
                0.25748047 = queryWeight, product of:
                  5.79699 = idf(docFreq=364, maxDocs=44218)
                  0.044416238 = queryNorm
                0.22416902 = fieldWeight in 93, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.79699 = idf(docFreq=364, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=93)
          0.5 = coord(1/2)
      0.6666667 = coord(4/6)
    
    Abstract
    Imagine a library containing 25 billion documents but with no centralized organization and no librarians. In addition, anyone may add a document at any time without telling anyone. You may feel sure that one of the documents contained in the collection has a piece of information that is vitally important to you, and, being impatient like most of us, you'd like to find it in a matter of seconds. How would you go about doing it? Posed in this way, the problem seems impossible. Yet this description is not too different from the World Wide Web, a huge, highly disorganized collection of documents in many different formats. Of course, we're all familiar with search engines (perhaps you found this article using one) so we know that there is a solution. This article will describe Google's PageRank algorithm and how it returns pages from the web's collection of 25 billion documents that match search criteria so well that "google" has become a widely used verb.
    Most search engines, including Google, continually run an army of computer programs that retrieve pages from the web, index the words in each document, and store this information in an efficient format. Each time a user asks for a web search using a search phrase, such as "search engine," the search engine determines all the pages on the web that contain the words in the search phrase. (Perhaps additional information such as the distance between the words "search" and "engine" will be noted as well.) Here is the problem: Google now claims to index 25 billion pages. Roughly 95% of the text in web pages is composed from a mere 10,000 words. This means that, for most searches, there will be a huge number of pages containing the words in the search phrase. What is needed is a means of ranking the importance of the pages that fit the search criteria so that the pages can be sorted with the most important pages at the top of the list.
    One way to determine the importance of pages is to use a human-generated ranking. For instance, you may have seen pages that consist mainly of a large number of links to other resources in a particular area of interest. Assuming the person maintaining this page is reliable, the pages referenced are likely to be useful. Of course, the list may quickly fall out of date, and the person maintaining the list may miss some important pages, either unintentionally or as a result of an unstated bias.
    Google's PageRank algorithm assesses the importance of web pages without human evaluation of the content. In fact, Google feels that the value of its service is largely in its ability to provide unbiased results to search queries; Google claims, "the heart of our software is PageRank." As we'll see, the trick is to ask the web itself to rank the importance of pages.
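    The algorithm the article goes on to describe is easy to sketch. A minimal power-iteration PageRank in Python, as an illustration of the idea rather than Google's production implementation (the tiny three-page graph is invented):
      import numpy as np

      def pagerank(links, alpha=0.85, tol=1e-10, max_iter=200):
          """Power-method PageRank over an adjacency dict {page: [outlinks]}."""
          pages = sorted(links)
          idx = {p: i for i, p in enumerate(pages)}
          n = len(pages)
          rank = np.full(n, 1.0 / n)
          for _ in range(max_iter):
              new = np.full(n, (1.0 - alpha) / n)    # random-jump share
              for page, outs in links.items():
                  if outs:                           # split rank over outlinks
                      share = alpha * rank[idx[page]] / len(outs)
                      for target in outs:
                          new[idx[target]] += share
                  else:                              # dangling page: spread evenly
                      new += alpha * rank[idx[page]] / n
              if np.abs(new - rank).sum() < tol:
                  break
              rank = new
          return dict(zip(pages, rank))

      print(pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]}))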
  4. Saabiyeh, N.: What is a good ontology semantic similarity measure that considers multiple inheritance cases of concepts? (2018) 0.09
    0.08834204 = product of:
      0.17668408 = sum of:
        0.067437425 = weight(_text_:wide in 4530) [ClassicSimilarity], result of:
          0.067437425 = score(doc=4530,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.342674 = fieldWeight in 4530, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4530)
        0.063368805 = weight(_text_:web in 4530) [ClassicSimilarity], result of:
          0.063368805 = score(doc=4530,freq=6.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.43716836 = fieldWeight in 4530, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4530)
        0.04587784 = weight(_text_:computer in 4530) [ClassicSimilarity], result of:
          0.04587784 = score(doc=4530,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.28263903 = fieldWeight in 4530, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4530)
      0.5 = coord(3/6)
    
    Abstract
    I need to measure semantic similarity between CSO ontology concepts, based on the ontology structure (concept path, depth, least common subsumer (LCS), etc.). CSO (Computer Science Ontology) is a large-scale ontology of research areas. A concept in CSO may have multiple parents/super concepts (i.e., a concept may be a child of many other concepts), e.g.: (world wide web) is a parent of (semantic web); (semantics) is a parent of (semantic web). I found some measures that meet my needs, but the papers proposing these measures are not cited, so I am hesitant to rely on them. I also found a measure that depends on weighted edges, but multiple inheritance (super concepts) is not considered.
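    For illustration only (none of the unnamed measures from the question are reproduced here): a Wu-Palmer-style similarity can be adapted to multiple inheritance by walking up through all parents of both concepts and taking the deepest common subsumer. A sketch using the concepts from the question, with "computer science" as an assumed root:
      from collections import deque

      # Child -> parents map; a DAG, so multiple inheritance is allowed.
      parents = {
          "semantic web": ["world wide web", "semantics"],
          "world wide web": ["computer science"],
          "semantics": ["computer science"],
          "computer science": [],
      }

      def up_distances(node):
          """Shortest upward distance from node to itself and every ancestor."""
          dist = {node: 0}
          queue = deque([node])
          while queue:
              n = queue.popleft()
              for p in parents.get(n, []):
                  if p not in dist:
                      dist[p] = dist[n] + 1
                      queue.append(p)
          return dist

      def depth(node, root="computer science"):
          """Depth from the root, counting the root itself as 1."""
          return up_distances(node)[root] + 1

      def wu_palmer(a, b, root="computer science"):
          common = set(up_distances(a)) & set(up_distances(b))  # common subsumers
          if not common:
              return 0.0
          lcs_depth = max(depth(c, root) for c in common)       # deepest one wins
          return 2 * lcs_depth / (depth(a, root) + depth(b, root))

      print(wu_palmer("semantic web", "semantics"))  # 0.8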
  5. Daudaravicius, V.: ¬A framework for keyphrase extraction from scientific journals (2016) 0.07
    0.074950635 = product of:
      0.14990127 = sum of:
        0.067437425 = weight(_text_:wide in 2930) [ClassicSimilarity], result of:
          0.067437425 = score(doc=2930,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.342674 = fieldWeight in 2930, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2930)
        0.036585998 = weight(_text_:web in 2930) [ClassicSimilarity], result of:
          0.036585998 = score(doc=2930,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25239927 = fieldWeight in 2930, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2930)
        0.04587784 = weight(_text_:computer in 2930) [ClassicSimilarity], result of:
          0.04587784 = score(doc=2930,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.28263903 = fieldWeight in 2930, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2930)
      0.5 = coord(3/6)
    
    Abstract
    We present a framework for keyphrase extraction from scientific journals in diverse research fields. While journal articles are often provided with manually assigned keywords, it is not clear how to automatically extract keywords and measure their significance for a set of journal articles. We compare extracted keyphrases from journals in the fields of astrophysics, mathematics, physics, and computer science. We show that the presented statistics-based framework is able to demonstrate differences among journals, and that the extracted keyphrases can be used to represent journal or conference research topics, dynamics, and specificity.
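    The paper's own statistics are more elaborate than this abstract can show; as a generic illustration of the statistics-based idea, candidate terms can be ranked by contrasting their frequency in one journal against a background corpus (a tf-idf-style score; all data below is invented):
      import math
      from collections import Counter

      def keyphrase_scores(journal_docs, background_docs):
          """Score unigrams by frequency in the journal, discounted by how
          common they are in a background corpus (a tf-idf-style contrast)."""
          tf = Counter(w for doc in journal_docs for w in doc.lower().split())
          df = Counter()
          for doc in background_docs:
              df.update(set(doc.lower().split()))
          n = len(background_docs)
          return {w: c * math.log(n / (1 + df[w])) for w, c in tf.items()}

      scores = keyphrase_scores(
          ["keyphrase extraction from scientific journals",
           "statistics of keyphrase significance"],
          ["journals publish many articles", "articles report statistics"],
      )
      print(sorted(scores, key=scores.get, reverse=True)[:3])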
    Content
    Talk, "Semantics, Analytics, Visualisation: Enhancing Scholarly Data Workshop, co-located with the 25th International World Wide Web Conference, April 11, 2016, Montreal, Canada", Montreal 2016.
  6. ¬Third International World Wide Web Conference, Darmstadt 1995 : [Inhaltsverzeichnis] (1995) 0.07
    0.07074061 = product of:
      0.21222183 = sum of:
        0.12925258 = weight(_text_:wide in 3458) [ClassicSimilarity], result of:
          0.12925258 = score(doc=3458,freq=10.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.65677917 = fieldWeight in 3458, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=3458)
        0.08296924 = weight(_text_:web in 3458) [ClassicSimilarity], result of:
          0.08296924 = score(doc=3458,freq=14.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.57238775 = fieldWeight in 3458, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3458)
      0.33333334 = coord(2/6)
    
    Abstract
    ANDREW, K. and F. KAPPE: Serving information to the Web with Hyper-G; BARBIERI, K., H.M. DOERR and D. DWYER: Creating a virtual classroom for interactive education on the Web; CAMPBELL, J.K., S.B. JONES, N.M. STEPHENS and S. HURLEY: Constructing educational courseware using NCSA Mosaic and the World Wide Web; CATLEDGE, L.L. and J.E. PITKOW: Characterizing browsing strategies in the World-Wide Web; CLAUSNITZER, A. and P. VOGEL: A WWW interface to the OMNIS/Myriad literature retrieval engine; FISCHER, R. and L. PERROCHON: IDLE: Unified W3-access to interactive information servers; FOLEY, J.D.: Visualizing the World-Wide Web with the navigational view builder; FRANKLIN, S.D. and B. IBRAHIM: Advanced educational uses of the World-Wide Web; FUHR, N., U. PFEIFER and T. HUYNH: Searching structured documents with the enhanced retrieval functionality of free WAIS-sf and SFgate; FIORITO, M., J. OKSANEN and D.R. IOIVANE: An educational environment using WWW; KENT, R.E. and C. NEUSS: Conceptual analysis of resource meta-information; SHELDON, M.A. and R. WEISS: Discover: a resource discovery system based on content routing; WINOGRAD, T.: Beyond browsing: shared comments, SOAPs, Trails, and On-line communities
  7. Leighton, H.V.: Performance of four World Wide Web (WWW) index services : Infoseek, Lycos, WebCrawler and WWWWorm (1995) 0.06
    0.059441954 = product of:
      0.17832586 = sum of:
        0.11560701 = weight(_text_:wide in 3168) [ClassicSimilarity], result of:
          0.11560701 = score(doc=3168,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.5874411 = fieldWeight in 3168, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.09375 = fieldNorm(doc=3168)
        0.062718846 = weight(_text_:web in 3168) [ClassicSimilarity], result of:
          0.062718846 = score(doc=3168,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.43268442 = fieldWeight in 3168, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=3168)
      0.33333334 = coord(2/6)
    
  8. Cohen, S.; Fereira, J.; Horne, A.; Kibbee, B.; Mistlebauer, H.; Smith, A.: MyLibrary : personalized electronic services in the Cornell University Library (2000) 0.06
    0.055749726 = product of:
      0.11149945 = sum of:
        0.03853567 = weight(_text_:wide in 1232) [ClassicSimilarity], result of:
          0.03853567 = score(doc=1232,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.1958137 = fieldWeight in 1232, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=1232)
        0.04674787 = weight(_text_:web in 1232) [ClassicSimilarity], result of:
          0.04674787 = score(doc=1232,freq=10.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.32250395 = fieldWeight in 1232, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=1232)
        0.02621591 = weight(_text_:computer in 1232) [ClassicSimilarity], result of:
          0.02621591 = score(doc=1232,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.16150802 = fieldWeight in 1232, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.03125 = fieldNorm(doc=1232)
      0.5 = coord(3/6)
    
    Abstract
    Library users who are Web users expect customization and interactivity. MyLibrary is a Cornell University Library initiative to provide numerous personalized library services to Cornell University students, faculty, and staff. Currently, it consists of MyLinks, a tool for collecting and organizing resources for private use by a patron, and MyUpdates, a tool to help scholars stay informed of new resources provided by the library. This article provides an overview of the MyLibrary project, explains the rationale for the development of the service in the library, briefly discusses the hardware and software used for the service, and suggests some of the directions for future developments of the MyLibrary system.
    MyYahoo!, MyCNN, MyBookmarks, MyThis and MyThat. Internet users have demanded a personal face to the World Wide Web, and Web portals and information providers have responded. Why not MyLibrary? The Library and Information Technology Association (LITA) has defined MyLibrary-like services as the number one trend "worth keeping an eye on". "Library users who are Web users, a growing group," the experts agree, "expect customization, interactivity, and customer support. Approaches that are library-focused instead of user-focused will be increasingly irrelevant." In response to the needs of web-savvy patrons, the Cornell University Library (CUL) implemented a MyLibrary service this year, making finding and using library resources easier than ever.
    MyLibrary is an "umbrella" service for two new products: MyLinks and MyUpdates. Other products are in development. MyLibrary's MyLinks is a tool for collecting and organizing resources for private use by a patron. These resources may or may not be "official" Cornell University Library resources. Our patrons best understand this service as a "traveling set of bookmarks". Most patrons of the library use a variety of machines to access Internet resources. For example, you may have a computer at home and one at work. Why should you create your bookmarks twice, or carry around a diskette containing your bookmarks? Students who rely on lab computers never know which machine they will use next. With MyLinks, a patron's favorite sites are just a click away from any machine.
  9. Resource Description Framework (RDF) (2004) 0.05
    0.053565495 = product of:
      0.16069648 = sum of:
        0.07707134 = weight(_text_:wide in 3063) [ClassicSimilarity], result of:
          0.07707134 = score(doc=3063,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.3916274 = fieldWeight in 3063, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0625 = fieldNorm(doc=3063)
        0.08362513 = weight(_text_:web in 3063) [ClassicSimilarity], result of:
          0.08362513 = score(doc=3063,freq=8.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.5769126 = fieldWeight in 3063, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=3063)
      0.33333334 = coord(2/6)
    
    Abstract
    The Resource Description Framework (RDF) integrates a variety of applications from library catalogs and world-wide directories to syndication and aggregation of news, software, and content to personal collections of music, photos, and events using XML as an interchange syntax. The RDF specifications provide a lightweight ontology system to support the exchange of knowledge on the Web. The W3C Semantic Web Activity Statement explains W3C's plans for RDF, including the RDF Core WG, Web Ontology and the RDF Interest Group.
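    A minimal sketch of what RDF "using XML as an interchange syntax" looks like in practice, written with the Python rdflib package (the resource names are invented):
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF

      EX = Namespace("http://example.org/")
      g = Graph()
      # Two statements about one resource: an item in a personal photo collection.
      g.add((EX.photo42, RDF.type, EX.Photo))
      g.add((EX.photo42, EX.takenBy, Literal("Alice")))

      # XML is the interchange syntax mentioned above; Turtle is a more
      # readable alternative serialization of the same graph.
      print(g.serialize(format="xml"))
      print(g.serialize(format="turtle"))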
    Theme
    Semantic Web
  10. Boldi, P.; Santini, M.; Vigna, S.: PageRank as a function of the damping factor (2005) 0.05
    0.050085746 = product of:
      0.10017149 = sum of:
        0.04816959 = weight(_text_:wide in 2564) [ClassicSimilarity], result of:
          0.04816959 = score(doc=2564,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.24476713 = fieldWeight in 2564, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2564)
        0.036957435 = weight(_text_:web in 2564) [ClassicSimilarity], result of:
          0.036957435 = score(doc=2564,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25496176 = fieldWeight in 2564, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2564)
        0.0150444675 = product of:
          0.030088935 = sum of:
            0.030088935 = weight(_text_:22 in 2564) [ClassicSimilarity], result of:
              0.030088935 = score(doc=2564,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.19345059 = fieldWeight in 2564, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2564)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
    
    Abstract
    PageRank is defined as the stationary state of a Markov chain. The chain is obtained by perturbing the transition matrix induced by a web graph with a damping factor alpha that spreads uniformly part of the rank. The choice of alpha is eminently empirical, and in most cases the original suggestion alpha=0.85 by Brin and Page is still used. Recently, however, the behaviour of PageRank with respect to changes in alpha was discovered to be useful in link-spam detection. Moreover, an analytical justification of the value chosen for alpha is still missing. In this paper, we give the first mathematical analysis of PageRank when alpha changes. In particular, we show that, contrary to popular belief, for real-world graphs values of alpha close to 1 do not give a more meaningful ranking. Then, we give closed-form formulae for PageRank derivatives of any order, and an extension of the Power Method that approximates them with convergence O(t^k alpha^t) for the k-th derivative. Finally, we show a tight connection between iterated computation and analytical behaviour by proving that the k-th iteration of the Power Method gives exactly the PageRank value obtained using a Maclaurin polynomial of degree k. The latter result paves the way towards the application of analytical methods to the study of PageRank.
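    The dependence on the damping factor is easy to observe numerically. A small sketch that computes PageRank for several values of alpha on an invented four-page graph and prints the resulting orderings:
      import numpy as np

      # Column-stochastic transition matrix of an invented 4-page web graph.
      P = np.array([[0,   0,   0.5, 0  ],
                    [1/3, 0,   0,   0.5],
                    [1/3, 0.5, 0,   0.5],
                    [1/3, 0.5, 0.5, 0  ]])

      def pagerank(P, alpha, iters=1000):
          n = P.shape[0]
          r = np.full(n, 1.0 / n)
          for _ in range(iters):
              r = alpha * (P @ r) + (1.0 - alpha) / n   # damped power iteration
          return r

      for alpha in (0.5, 0.85, 0.99):
          r = pagerank(P, alpha)
          print(alpha, r.round(4), "order:", np.argsort(-r))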
    Date
    16. 1.2016 10:22:28
    Source
    http://vigna.di.unimi.it/ftp/papers/PageRankAsFunction.pdf [Proceedings of the ACM World Wide Web Conference (WWW), 2005]
  11. Bechhofer, S.; Harmelen, F. van; Hendler, J.; Horrocks, I.; McGuinness, D.L.; Patel-Schneider, P.F.; Stein, L.A.: OWL Web Ontology Language Reference (2004) 0.05
    0.049748734 = product of:
      0.1492462 = sum of:
        0.067437425 = weight(_text_:wide in 4684) [ClassicSimilarity], result of:
          0.067437425 = score(doc=4684,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.342674 = fieldWeight in 4684, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4684)
        0.081808776 = weight(_text_:web in 4684) [ClassicSimilarity], result of:
          0.081808776 = score(doc=4684,freq=10.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.5643819 = fieldWeight in 4684, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4684)
      0.33333334 = coord(2/6)
    
    Abstract
    The Web Ontology Language OWL is a semantic markup language for publishing and sharing ontologies on the World Wide Web. OWL is developed as a vocabulary extension of RDF (the Resource Description Framework) and is derived from the DAML+OIL Web Ontology Language. This document contains a structured informal description of the full set of OWL language constructs and is meant to serve as a reference for OWL users who want to construct OWL ontologies.
    Theme
    Semantic Web
  12. Wright, H.: Semantic Web and ontologies (2018) 0.05
    0.049748734 = product of:
      0.1492462 = sum of:
        0.067437425 = weight(_text_:wide in 80) [ClassicSimilarity], result of:
          0.067437425 = score(doc=80,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.342674 = fieldWeight in 80, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=80)
        0.081808776 = weight(_text_:web in 80) [ClassicSimilarity], result of:
          0.081808776 = score(doc=80,freq=10.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.5643819 = fieldWeight in 80, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=80)
      0.33333334 = coord(2/6)
    
    Abstract
    The Semantic Web and ontologies can help archaeologists combine and share data, making it more open and useful. Archaeologists create diverse types of data, using a wide variety of technologies and methodologies, and, as in all research domains, these data are increasingly digital. The creation of data that are now openly and persistently available from disparate sources has also inspired efforts to bring archaeological resources together and make them more interoperable. This allows functionality such as federated cross-search across different datasets, and the mapping of heterogeneous data to authoritative structures to build a single data source. Ontologies provide the structure and relationships for Semantic Web data, and have been developed for use in cultural heritage applications generally, and archaeology specifically. A variety of online resources for archaeology now incorporate Semantic Web principles and technologies.
    Theme
    Semantic Web
  13. Herwijnen, E. van: SGML tutorial (1993) 0.05
    0.0493319 = product of:
      0.1479957 = sum of:
        0.06553978 = weight(_text_:computer in 8747) [ClassicSimilarity], result of:
          0.06553978 = score(doc=8747,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.40377006 = fieldWeight in 8747, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.078125 = fieldNorm(doc=8747)
        0.08245592 = product of:
          0.16491184 = sum of:
            0.16491184 = weight(_text_:programs in 8747) [ClassicSimilarity], result of:
              0.16491184 = score(doc=8747,freq=2.0), product of:
                0.25748047 = queryWeight, product of:
                  5.79699 = idf(docFreq=364, maxDocs=44218)
                  0.044416238 = queryNorm
                0.6404829 = fieldWeight in 8747, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.79699 = idf(docFreq=364, maxDocs=44218)
                  0.078125 = fieldNorm(doc=8747)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Contains extensive interactive tutorials and exercises, at both introductory and advanced levels, to teach SGML, and uses DynaText software to manage, browse and search the text, thus demonstrating the features of one of the most widely known programs available for SGML marked-up text
    Issue
    Version 2. Computer file.
  14. Auer, S.; Bizer, C.; Kobilarov, G.; Lehmann, J.; Cyganiak, R.; Ives, Z.: DBpedia: a nucleus for a Web of open data (2007) 0.05
    0.046163693 = product of:
      0.13849108 = sum of:
        0.09916721 = weight(_text_:web in 4260) [ClassicSimilarity], result of:
          0.09916721 = score(doc=4260,freq=20.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.6841342 = fieldWeight in 4260, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4260)
        0.039323866 = weight(_text_:computer in 4260) [ClassicSimilarity], result of:
          0.039323866 = score(doc=4260,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.24226204 = fieldWeight in 4260, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=4260)
      0.33333334 = coord(2/6)
    
    Abstract
    DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human and machine consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data.
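    A sketch of the kind of "sophisticated query" the abstract refers to, sent to the public DBpedia SPARQL endpoint via the Python SPARQLWrapper package (the endpoint's built-in dbo: prefix is assumed; results depend on the live dataset):
      from SPARQLWrapper import SPARQLWrapper, JSON

      sparql = SPARQLWrapper("https://dbpedia.org/sparql")
      sparql.setQuery("""
          SELECT ?city ?pop WHERE {
            ?city a dbo:City ;
                  dbo:populationTotal ?pop .
            FILTER (?pop > 5000000)
          }
          LIMIT 10
      """)
      sparql.setReturnFormat(JSON)

      # Each binding row maps variable names to values.
      for row in sparql.query().convert()["results"]["bindings"]:
          print(row["city"]["value"], row["pop"]["value"])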
    Series
    Lecture notes in computer science ; 4825
    Source
    ¬The Semantic Web : 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC 2007 + ASWC 2007, Busan, Korea, November 11-15, 2007 : proceedings. Ed.: Karl Aberer et al
    Theme
    Semantic Web
  15. RDF Primer : W3C Recommendation 10 February 2004 (2004) 0.05
    0.045401078 = product of:
      0.13620323 = sum of:
        0.07707134 = weight(_text_:wide in 3064) [ClassicSimilarity], result of:
          0.07707134 = score(doc=3064,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.3916274 = fieldWeight in 3064, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0625 = fieldNorm(doc=3064)
        0.059131898 = weight(_text_:web in 3064) [ClassicSimilarity], result of:
          0.059131898 = score(doc=3064,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.4079388 = fieldWeight in 3064, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=3064)
      0.33333334 = coord(2/6)
    
    Abstract
    The Resource Description Framework (RDF) is a language for representing information about resources in the World Wide Web. This Primer is designed to provide the reader with the basic knowledge required to effectively use RDF. It introduces the basic concepts of RDF and describes its XML syntax. It describes how to define RDF vocabularies using the RDF Vocabulary Description Language, and gives an overview of some deployed RDF applications. It also describes the content and purpose of other RDF specification documents.
    Theme
    Semantic Web
  16. Dushay, N.: Visualizing bibliographic metadata : a virtual (book) spine viewer (2004) 0.04
    0.044273444 = product of:
      0.08854689 = sum of:
        0.028901752 = weight(_text_:wide in 1197) [ClassicSimilarity], result of:
          0.028901752 = score(doc=1197,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.14686027 = fieldWeight in 1197, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1197)
        0.015679711 = weight(_text_:web in 1197) [ClassicSimilarity], result of:
          0.015679711 = score(doc=1197,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.108171105 = fieldWeight in 1197, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1197)
        0.043965418 = weight(_text_:computer in 1197) [ClassicSimilarity], result of:
          0.043965418 = score(doc=1197,freq=10.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.2708572 = fieldWeight in 1197, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1197)
      0.5 = coord(3/6)
    
    Abstract
    User interfaces for digital information discovery often require users to click around and read a lot of text in order to find the text they want to read, a process that is often frustrating and tedious. This is exacerbated because of the limited amount of text that can be displayed on a computer screen. To improve the user experience of computer-mediated information discovery, information visualization techniques are applied to the digital library context, while retaining traditional information organization concepts. In this article, the "virtual (book) spine" and the virtual spine viewer are introduced. The virtual spine viewer is an application which allows users to visually explore large information spaces or collections while also allowing users to hone in on individual resources of interest. The virtual spine viewer introduced here is an alpha prototype, presented to promote discussion and further work. Information discovery changed radically with the introduction of computerized library access catalogs, the World Wide Web and its search engines, and online bookstores. Yet few instances of these technologies provide a user experience analogous to walking among well-organized, well-stocked bookshelves, which many people find useful as well as pleasurable. To put it another way, many of us have heard or voiced complaints about the paucity of "online browsing", but what does this really mean? In traditional information spaces such as libraries, often we can move freely among the books and other resources. When we walk among organized, labeled bookshelves, we get a sense of the information space: we take in clues, perhaps unconsciously, as to the scope of the collection, the currency of resources, the frequency of their use, etc. We also enjoy unexpected discoveries such as finding an interesting resource because library staff deliberately located it near similar resources, or because it was mis-shelved, or because we saw it on a bookshelf on the way to the water fountain.
    When our experience of information discovery is mediated by a computer, we neither move ourselves nor the monitor. We have only the computer's monitor to view, and the keyboard and/or mouse to manipulate what is displayed there. Computer interfaces often reduce our ability to get a sense of the contents of a library: we don't perceive the scope of the library: its breadth (the quantity of materials/information), its density (how full the shelves are, how thorough the collection is for individual topics), or the general audience for the materials (e.g., whether the materials are appropriate for middle school students, college professors, etc.). Additionally, many computer interfaces for information discovery require users to scroll through long lists, to click numerous navigational links and to read a lot of text to find the exact text they want to read. Text features of resources are almost always presented alphabetically, and the number of items in these alphabetical lists sometimes can be very long. Alphabetical ordering is certainly an improvement over no ordering, but it generally has no bearing on features with an inherent non-alphabetical ordering (e.g., dates of historical events), nor does it necessarily group similar items together. Alphabetical ordering of resources is analogous to one of the most familiar complaints about dictionaries: sometimes you need to know how to spell a word in order to look up its correct spelling in the dictionary. Some have used technology to replicate the appearance of physical libraries, presenting rooms of bookcases and shelves of book spines in virtual 3D environments. This approach presents a problem, as few book spines can be displayed legibly on a monitor screen. This article examines the role of book spines, call numbers, and other traditional organizational and information discovery concepts, and integrates this knowledge with information visualization techniques to show how computers and monitors can meet or exceed similar information discovery methods. The goal is to tap the unique potentials of current information visualization approaches in order to improve information discovery, offer new services, and most important of all, improve user satisfaction. We need to capitalize on what computers do well while bearing in mind their limitations. The intent is to design GUIs to optimize utility and provide a positive experience for the user.
  17. Peters, C.; Picchi, E.: Across languages, across cultures : issues in multilinguality and digital libraries (1997) 0.04
    0.04316772 = product of:
      0.12950316 = sum of:
        0.07707134 = weight(_text_:wide in 1233) [ClassicSimilarity], result of:
          0.07707134 = score(doc=1233,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.3916274 = fieldWeight in 1233, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0625 = fieldNorm(doc=1233)
        0.05243182 = weight(_text_:computer in 1233) [ClassicSimilarity], result of:
          0.05243182 = score(doc=1233,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.32301605 = fieldWeight in 1233, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0625 = fieldNorm(doc=1233)
      0.33333334 = coord(2/6)
    
    Abstract
    With the recent rapid diffusion over the international computer networks of world-wide distributed document bases, the question of multilingual access and multilingual information retrieval is becoming increasingly relevant. We briefly discuss just some of the issues that must be addressed in order to implement a multilingual interface for a Digital Library system and describe our own approach to this problem.
  18. Denton, W.: Putting facets on the Web : an annotated bibliography (2003) 0.04
    0.042866588 = product of:
      0.085733175 = sum of:
        0.024084795 = weight(_text_:wide in 2467) [ClassicSimilarity], result of:
          0.024084795 = score(doc=2467,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.122383565 = fieldWeight in 2467, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2467)
        0.045263432 = weight(_text_:web in 2467) [ClassicSimilarity], result of:
          0.045263432 = score(doc=2467,freq=24.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.3122631 = fieldWeight in 2467, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2467)
        0.016384944 = weight(_text_:computer in 2467) [ClassicSimilarity], result of:
          0.016384944 = score(doc=2467,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.100942515 = fieldWeight in 2467, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2467)
      0.5 = coord(3/6)
    
    Abstract
    This is a classified, annotated bibliography about how to design faceted classification systems and make them usable on the World Wide Web. It is the first of three works I will be doing. The second, based on the material here and elsewhere, will discuss how to actually make the faceted system and put it online. The third will be a report of how I did just that, what worked, what didn't, and what I learned. Almost every article or book listed here begins with an explanation of what a faceted classification system is, so I won't (but see Steckel in Background below if you don't already know). They all agree that faceted systems are very appropriate for the web. Even pre-web articles (such as Duncan's in Background, below) assert that hypertext and facets will go together well. Combining the two, it is possible to take a set of documents and classify them or apply subject headings to describe what they are about, then build a navigational structure so that any user, no matter how he or she approaches the material, no matter what his or her goals, can move and search in a way that makes sense to them, but still get to the same useful results as someone else following a different path to the same goal. There is no one way that everyone will always use when looking for information. The more flexible the organization of the information, the more accommodating it is. Facets are more flexible for hypertext browsing than any enumerative or hierarchical system.
    Consider movie listings in newspapers. Most Canadian newspapers list movie showtimes in two large blocks, for the two major theatre chains. The listings are ordered by region (in large cities), then theatre, then movie, and finally by showtime. Anyone wondering where and when a particular movie is playing must scan the complete listings. Determining what movies are playing in the next half hour is very difficult. When movie listings went onto the web, most sites used a simple faceted organization, always with movie name and theatre, and perhaps with region or neighbourhood (thankfully, theatre chains were left out). They make it easy to pick a theatre and see what movies are playing there, or to pick a movie and see what theatres are showing it. To complete the system, the sites should allow users to browse by neighbourhood and showtime, and to order the results in any way they desired. Thus could people easily find answers to such questions as, "Where is the new James Bond movie playing?" "What's showing at the Roxy tonight?" "I'm going to be out in Little Finland this afternoon with three hours to kill starting at 2 ... is anything interesting playing?" A hypertext, faceted classification system makes more useful information more easily available to the user; a toy implementation of this movie-listing idea is sketched after this abstract. Reading the books and articles below in chronological order will show a certain progression: suggestions that faceting and hypertext might work well, confidence that facets would work well if only someone would make such a system, and finally the beginning of serious work on actually designing, building, and testing faceted web sites. There is a solid basis of how to make faceted classifications (see Vickery in Recommended), but their application online is just starting. Work on XFML (see Van Dijck's work in Recommended), the Exchangeable Faceted Metadata Language, will make this easier. If it follows previous patterns, parts of the Internet community will embrace the idea and make open source software available for others to reuse. It will be particularly beneficial if professionals in both information studies and computer science can work together to build working systems, standards, and code. Each can benefit from the other's expertise in what can be a very complicated and technical area. One particularly nice thing about this area of research is that people interested in combining facets and the web often have web sites where they post their writings.
    This bibliography is not meant to be exhaustive, but unfortunately it is not as complete as I wanted. Some books and articles are not included, but they may be used in my future work. (These include two books and one article by B.C. Vickery: Faceted Classification Schemes (New Brunswick, NJ: Rutgers, 1966), Classification and Indexing in Science, 3rd ed. (London: Butterworths, 1975), and "Knowledge Representation: A Brief Review" (Journal of Documentation 42 no. 3 (September 1986): 145-159; and A.C. Foskett's "The Future of Faceted Classification" in The Future of Classification, edited by Rita Marcella and Arthur Maltby (Aldershot, England: Gower, 2000): 69-80). Nevertheless, I hope this bibliography will be useful both for those new to faceted hypertext systems and for those already familiar with them. Some very basic resources are listed, as well as some very advanced ones. Some example web sites are mentioned, but there is no detailed technical discussion of any software. The user interface to any web site is extremely important, and this is briefly mentioned in two or three places (for example the discussion of lawforwa.org (see Example Web Sites)). The larger question of how to display information graphically and with hypertext is outside the scope of this bibliography. There are five sections: Recommended, Background, Not Relevant, Example Web Sites, and Mailing Lists. Background material is either introductory, advanced, or of peripheral interest, and can be read after the Recommended resources if the reader wants to know more. The Not Relevant category contains articles that may appear in bibliographies but are not relevant for my purposes.
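    A toy version of the faceted movie-listing idea from the second paragraph above, to make the mechanism concrete (all data invented):
      # Every record carries facet values; any combination of facet choices,
      # in any order, narrows the result set.
      listings = [
          {"movie": "Goldfinger", "theatre": "Roxy", "area": "Downtown", "time": "19:00"},
          {"movie": "Goldfinger", "theatre": "Paramount", "area": "Little Finland", "time": "14:30"},
          {"movie": "Metropolis", "theatre": "Roxy", "area": "Downtown", "time": "21:15"},
      ]

      def browse(records, **facets):
          """Filter by any subset of facets, in any order the user chooses."""
          return [r for r in records
                  if all(r.get(f) == v for f, v in facets.items())]

      print(browse(listings, movie="Goldfinger"))     # where is it playing?
      print(browse(listings, theatre="Roxy"))         # what's on at the Roxy?
      print(browse(listings, area="Little Finland"))  # nearby this afternoon?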
  19. Singh, A.; Sinha, U.; Sharma, D.k.: Semantic Web and data visualization (2020) 0.04
    0.04241117 = product of:
      0.1272335 = sum of:
        0.03853567 = weight(_text_:wide in 79) [ClassicSimilarity], result of:
          0.03853567 = score(doc=79,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.1958137 = fieldWeight in 79, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
        0.08869784 = weight(_text_:web in 79) [ClassicSimilarity], result of:
          0.08869784 = score(doc=79,freq=36.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.6119082 = fieldWeight in 79, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
      0.33333334 = coord(2/6)
    
    Abstract
    With the tremendous growth of data volume and data being produced every second on millions of devices across the globe, there is a desperate need to manage the unstructured data available on web pages efficiently. The Semantic Web, also known as the Web of Data, structures the scattered data on the Internet according to the needs of the user. It is an extension of the World Wide Web (WWW) which focuses on manipulating web data on behalf of humans. Because the Semantic Web can integrate data from disparate sources and is hence more user-friendly, it is an emerging trend. Tim Berners-Lee first introduced the term Semantic Web, and since then it has come a long way to become a more intelligent and intuitive web. Data visualization plays an essential role in explaining complex concepts in a universal manner through pictorial representation, and the Semantic Web helps in broadening the potential of data visualization, making the two an appropriate combination. The objective of this chapter is to provide fundamental insights into Semantic Web technologies and, in addition, to elucidate the issues as well as the solutions regarding the Semantic Web. The chapter highlights the Semantic Web architecture in detail while also comparing it with the traditional search system. It classifies the Semantic Web architecture into three major pillars, i.e., RDF, ontology, and XML. Moreover, it describes different Semantic Web tools used in the framework and technology, and it attempts to illustrate different approaches of Semantic Web search engines. Besides stating numerous challenges faced by the Semantic Web, it also illustrates the solutions.
    Theme
    Semantic Web
  20. Martínez-González, M.M.; Alvite-Díez, M.L.: Thesauri and Semantic Web : discussion of the evolution of thesauri toward their integration with the Semantic Web (2019) 0.04
    0.042189382 = product of:
      0.12656814 = sum of:
        0.04816959 = weight(_text_:wide in 5997) [ClassicSimilarity], result of:
          0.04816959 = score(doc=5997,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.24476713 = fieldWeight in 5997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5997)
        0.078398556 = weight(_text_:web in 5997) [ClassicSimilarity], result of:
          0.078398556 = score(doc=5997,freq=18.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.5408555 = fieldWeight in 5997, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5997)
      0.33333334 = coord(2/6)
    
    Abstract
    Thesauri are Knowledge Organization Systems (KOS) that arise from the consensus of wide communities. They have been in use for many years and are regularly updated. Whereas in the past thesauri were designed for information professionals for indexing and searching, today there is a demand for conceptual vocabularies that enable inferencing by machines. The development of the Semantic Web has brought a new opportunity for thesauri, but thesauri also face the challenge of proving that they add value to it. The evolution of thesauri toward their integration with the Semantic Web is examined. Elements and structures in the thesaurus standard, ISO 25964, and SKOS (Simple Knowledge Organization System), the Semantic Web standard for representing KOS, are reviewed and compared. Moreover, the integrity rules of thesauri are contrasted with the axioms of SKOS. How SKOS has been applied to represent some real thesauri is taken into account. Three thesauri are chosen for this aim: AGROVOC, EuroVoc and the UNESCO Thesaurus. Based on the results of this comparison and analysis, the benefits that Semantic Web technologies offer to thesauri, how thesauri can contribute to the Semantic Web, and the challenges that would help to improve their integration with the Semantic Web are discussed.
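    What representing a thesaurus fragment in SKOS means in practice, sketched with the Python rdflib package (the two concepts are invented, not taken from AGROVOC, EuroVoc or the UNESCO Thesaurus):
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/thesaurus/")
      g = Graph()
      g.add((EX.vehicles, RDF.type, SKOS.Concept))
      g.add((EX.vehicles, SKOS.prefLabel, Literal("vehicles", lang="en")))
      g.add((EX.cars, RDF.type, SKOS.Concept))
      g.add((EX.cars, SKOS.prefLabel, Literal("cars", lang="en")))
      g.add((EX.cars, SKOS.broader, EX.vehicles))    # the thesaurus BT relation
      g.add((EX.cars, SKOS.altLabel, Literal("automobiles", lang="en")))  # UF term

      print(g.serialize(format="turtle"))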
    Theme
    Semantic Web

Types

  • a 206
  • n 13
  • r 9
  • s 9
  • x 6
  • p 3
  • i 2
  • m 2
  • More… Less…