Search (1762 results, page 2 of 89)

  • language_ss:"e"
  • year_i:[2010 TO 2020}
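The active facets above use Solr/Lucene query syntax; in `year_i:[2010 TO 2020}` the square bracket makes the lower bound inclusive and the curly brace makes the upper bound exclusive. A minimal sketch of how such a filtered search could be reproduced against a generic Solr core follows; the endpoint URL, core name, query terms, and page size are assumptions, and only the two filter expressions are taken from this page.

```python
# Minimal sketch: reproduce the two facet filters above against a generic Solr core.
# The endpoint URL, core name, query terms, and page size are assumptions; only the
# filter expressions are taken from this results page.
import requests

SOLR_URL = "http://localhost:8983/solr/biblio/select"  # hypothetical endpoint

params = {
    "q": "tagging web",                  # assumed query terms (not shown on this page)
    "fq": ['language_ss:"e"',            # facet: English-language records
           "year_i:[2010 TO 2020}"],     # inclusive lower bound, exclusive upper bound
    "start": 20,                         # page 2, assuming 20 hits per page (1762 hits / 89 pages)
    "rows": 20,
    "debugQuery": "true",                # ask Solr for per-document score explanations
}

response = requests.get(SOLR_URL, params=params)
response.raise_for_status()
data = response.json()
print(data["response"]["numFound"])      # total hit count (1762 on this page)
```

With `debugQuery=true`, Solr returns score explanation trees of the kind shown under each result below.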
  1. Ding, Y.; Jacob, E.K.; Fried, M.; Toma, I.; Yan, E.; Foo, S.; Milojević, S.: Upper tag ontology for integrating social tagging data (2010) 0.08
    0.076250955 = product of:
      0.30500382 = sum of:
        0.21784876 = weight(_text_:tagging in 3421) [ClassicSimilarity], result of:
          0.21784876 = score(doc=3421,freq=14.0), product of:
            0.21038401 = queryWeight, product of:
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.035634913 = queryNorm
            1.0354816 = fieldWeight in 3421, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.046875 = fieldNorm(doc=3421)
        0.043577533 = weight(_text_:web in 3421) [ClassicSimilarity], result of:
          0.043577533 = score(doc=3421,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.37471575 = fieldWeight in 3421, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3421)
        0.043577533 = weight(_text_:web in 3421) [ClassicSimilarity], result of:
          0.043577533 = score(doc=3421,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.37471575 = fieldWeight in 3421, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3421)
      0.25 = coord(3/12)
    
    Abstract
    Data integration and mediation have become central concerns of information technology over the past few decades. With the advent of the Web and the rapid increases in the amount of data and the number of Web documents and users, researchers have focused on enhancing the interoperability of data through the development of metadata schemes. Other researchers have looked to the wealth of metadata generated by bookmarking sites on the Social Web. While several existing ontologies have capitalized on the semantics of metadata created by tagging activities, the Upper Tag Ontology (UTO) emphasizes the structure of tagging activities to facilitate modeling of tagging data and the integration of data from different bookmarking sites as well as the alignment of tagging ontologies. UTO is described and its utility in modeling, harvesting, integrating, searching, and analyzing data is demonstrated with metadata harvested from three major social tagging systems (Delicious, Flickr, and YouTube).
    Theme
    Social tagging
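The score explanation printed under result 1 is Lucene's ClassicSimilarity (TF-IDF) breakdown: each matching clause contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm, fieldWeight = sqrt(termFreq) x idf x fieldNorm, idf = 1 + ln(maxDocs / (docFreq + 1)), and the clause sum is multiplied by the coordination factor coord(matching clauses / total clauses). The sketch below recomputes the displayed 0.076250955 from the values in that tree; it is an illustration of the formula, not the actual Lucene implementation.

```python
# Recompute the ClassicSimilarity score of result 1 (doc 3421) from the explain values above.
import math

QUERY_NORM = 0.035634913
FIELD_NORM = 0.046875
MAX_DOCS = 44218

def idf(doc_freq: int) -> float:
    """ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

def clause_weight(freq: float, doc_freq: int) -> float:
    """weight = queryWeight * fieldWeight = (idf * queryNorm) * (sqrt(tf) * idf * fieldNorm)."""
    i = idf(doc_freq)
    query_weight = i * QUERY_NORM
    field_weight = math.sqrt(freq) * i * FIELD_NORM
    return query_weight * field_weight

clauses = [
    clause_weight(freq=14.0, doc_freq=327),   # _text_:tagging  -> ~0.21785
    clause_weight(freq=6.0,  doc_freq=4597),  # _text_:web      -> ~0.04358
    clause_weight(freq=6.0,  doc_freq=4597),  # second web clause, same value
]
score = sum(clauses) * (3 / 12)               # coord: 3 of 12 query clauses matched
print(round(score, 6))                        # ~0.076251, matching the displayed 0.076250955
```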
  2. Lee, Y.Y.; Yang, S.Q.: Folksonomies as subject access : a survey of tagging in library online catalogs and discovery layers (2012) 0.07
    0.07433406 = product of:
      0.29733625 = sum of:
        0.24701725 = weight(_text_:tagging in 309) [ClassicSimilarity], result of:
          0.24701725 = score(doc=309,freq=18.0), product of:
            0.21038401 = queryWeight, product of:
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.035634913 = queryNorm
            1.1741256 = fieldWeight in 309, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.046875 = fieldNorm(doc=309)
        0.025159499 = weight(_text_:web in 309) [ClassicSimilarity], result of:
          0.025159499 = score(doc=309,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.21634221 = fieldWeight in 309, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=309)
        0.025159499 = weight(_text_:web in 309) [ClassicSimilarity], result of:
          0.025159499 = score(doc=309,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.21634221 = fieldWeight in 309, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=309)
      0.25 = coord(3/12)
    
    Abstract
    This paper describes a survey of how system vendors and libraries handle tagging in OPACs and discovery layers. Tags are user-added subject metadata, also called folksonomies. The survey also investigated user behavior when users are given the option to tag. The findings indicate that legacy/classic systems have no tagging capability. About 47% of the discovery tools provide a tagging function. About 49% of the libraries whose systems have tagging capability have turned the tagging function on in their OPACs and discovery tools. Only 40% of the libraries that turned tagging on actually use the user-added subject metadata as an access point to their collections. Academic library users are less active in tagging than public library users.
    Source
    Beyond libraries - subject metadata in the digital environment and semantic web. IFLA Satellite Post-Conference, 17-18 August 2012, Tallinn
    Theme
    Social tagging
  3. Keyser, P. de: Indexing : from thesauri to the Semantic Web (2012) 0.07
    0.0733597 = product of:
      0.22007908 = sum of:
        0.08233908 = weight(_text_:tagging in 3197) [ClassicSimilarity], result of:
          0.08233908 = score(doc=3197,freq=2.0), product of:
            0.21038401 = queryWeight, product of:
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.035634913 = queryNorm
            0.39137518 = fieldWeight in 3197, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.046875 = fieldNorm(doc=3197)
        0.06162794 = weight(_text_:web in 3197) [ClassicSimilarity], result of:
          0.06162794 = score(doc=3197,freq=12.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.5299281 = fieldWeight in 3197, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3197)
        0.06162794 = weight(_text_:web in 3197) [ClassicSimilarity], result of:
          0.06162794 = score(doc=3197,freq=12.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.5299281 = fieldWeight in 3197, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3197)
        0.014484116 = product of:
          0.028968232 = sum of:
            0.028968232 = weight(_text_:22 in 3197) [ClassicSimilarity], result of:
              0.028968232 = score(doc=3197,freq=2.0), product of:
                0.12478739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035634913 = queryNorm
                0.23214069 = fieldWeight in 3197, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3197)
          0.5 = coord(1/2)
      0.33333334 = coord(4/12)
    
    Abstract
    Indexing consists of both novel and more traditional techniques. Cutting-edge indexing techniques, such as automatic indexing, ontologies, and topic maps, were developed independently of older techniques such as thesauri, but it is now recognized that these older methods also hold expertise. Indexing describes various traditional and novel indexing techniques, giving information professionals and students of library and information sciences a broad and comprehensible introduction to indexing. This title consists of twelve chapters: an Introduction to subject headings and thesauri; Automatic indexing versus manual indexing; Techniques applied in automatic indexing of text material; Automatic indexing of images; The black art of indexing moving images; Automatic indexing of music; Taxonomies and ontologies; Metadata formats and indexing; Tagging; Topic maps; Indexing the web; and The Semantic Web.
    Date
    24. 8.2016 14:03:22
    RSWK
    Semantic Web
    Subject
    Semantic Web
    Theme
    Semantic Web
  4. Belém, F.M.; Almeida, J.M.; Gonçalves, M.A.: ¬A survey on tag recommendation methods : a review (2017) 0.07
    0.07275806 = product of:
      0.21827418 = sum of:
        0.068615906 = weight(_text_:tagging in 3524) [ClassicSimilarity], result of:
          0.068615906 = score(doc=3524,freq=2.0), product of:
            0.21038401 = queryWeight, product of:
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.035634913 = queryNorm
            0.326146 = fieldWeight in 3524, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3524)
        0.029650755 = weight(_text_:web in 3524) [ClassicSimilarity], result of:
          0.029650755 = score(doc=3524,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25496176 = fieldWeight in 3524, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3524)
        0.029650755 = weight(_text_:web in 3524) [ClassicSimilarity], result of:
          0.029650755 = score(doc=3524,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25496176 = fieldWeight in 3524, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3524)
        0.09035677 = sum of:
          0.06621657 = weight(_text_:2.0 in 3524) [ClassicSimilarity], result of:
            0.06621657 = score(doc=3524,freq=2.0), product of:
              0.20667298 = queryWeight, product of:
                5.799733 = idf(docFreq=363, maxDocs=44218)
                0.035634913 = queryNorm
              0.320393 = fieldWeight in 3524, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.799733 = idf(docFreq=363, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3524)
          0.024140194 = weight(_text_:22 in 3524) [ClassicSimilarity], result of:
            0.024140194 = score(doc=3524,freq=2.0), product of:
              0.12478739 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.035634913 = queryNorm
              0.19345059 = fieldWeight in 3524, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3524)
      0.33333334 = coord(4/12)
    
    Abstract
    Tags (keywords freely assigned by users to describe web content) have become highly popular in Web 2.0 applications because they give users a strong incentive and an easy way to create and describe their own content. This increase in tag popularity has led to a vast literature on tag recommendation methods. These methods aim at assisting users in the tagging process, possibly increasing the quality of the generated tags and, consequently, improving the quality of the information retrieval (IR) services that rely on tags as data sources. Despite the numerous and diverse previous studies on tag recommendation, to our knowledge no previous work has summarized and organized them into a single survey article. In this article, we propose a taxonomy for tag recommendation methods, classifying them according to the target of the recommendations, their objectives, exploited data sources, and underlying techniques. Moreover, we provide a critical overview of these methods, pointing out their advantages and disadvantages. Finally, we describe the main open challenges related to the field, such as tag ambiguity, cold start, and evaluation issues.
    Date
    16.11.2017 13:30:22
  5. Konkova, E.; Göker, A.; Butterworth, R.; MacFarlane, A.: Social tagging: exploring the image, the tags, and the game (2014) 0.07
    0.072252646 = product of:
      0.28901058 = sum of:
        0.21784876 = weight(_text_:tagging in 1370) [ClassicSimilarity], result of:
          0.21784876 = score(doc=1370,freq=14.0), product of:
            0.21038401 = queryWeight, product of:
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.035634913 = queryNorm
            1.0354816 = fieldWeight in 1370, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.046875 = fieldNorm(doc=1370)
        0.035580907 = weight(_text_:web in 1370) [ClassicSimilarity], result of:
          0.035580907 = score(doc=1370,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.3059541 = fieldWeight in 1370, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1370)
        0.035580907 = weight(_text_:web in 1370) [ClassicSimilarity], result of:
          0.035580907 = score(doc=1370,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.3059541 = fieldWeight in 1370, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1370)
      0.25 = coord(3/12)
    
    Abstract
    Large image collections on the Web need to be organized for effective retrieval. Metadata plays a key role in image retrieval, but relying on professionally assigned tags is not a viable option at this scale. Current content-based image retrieval systems have not demonstrated sufficient utility on large-scale image sources on the web, and are usually used as a supplement to existing text-based image retrieval systems. We present two social tagging alternatives in the form of photo-sharing networks and image labeling games. Here we analyze these applications to evaluate their usefulness from the semantic point of view, investigating the management of social tagging for indexing. The findings of the study show that social tagging can generate a sizeable number of tags that can be classified as interpretive for an image, and that tagging behaviour has a manageable and adjustable nature depending on tagging guidelines.
    Theme
    Social tagging
  6. Heuvel, C. van den: Multidimensional classifications : past and future conceptualizations and visualizations (2012) 0.07
    0.07101037 = product of:
      0.1704249 = sum of:
        0.02935275 = weight(_text_:web in 632) [ClassicSimilarity], result of:
          0.02935275 = score(doc=632,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25239927 = fieldWeight in 632, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=632)
        0.040716566 = weight(_text_:world in 632) [ClassicSimilarity], result of:
          0.040716566 = score(doc=632,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.29726875 = fieldWeight in 632, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0546875 = fieldNorm(doc=632)
        0.05410469 = weight(_text_:wide in 632) [ClassicSimilarity], result of:
          0.05410469 = score(doc=632,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.342674 = fieldWeight in 632, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=632)
        0.02935275 = weight(_text_:web in 632) [ClassicSimilarity], result of:
          0.02935275 = score(doc=632,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25239927 = fieldWeight in 632, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=632)
        0.016898135 = product of:
          0.03379627 = sum of:
            0.03379627 = weight(_text_:22 in 632) [ClassicSimilarity], result of:
              0.03379627 = score(doc=632,freq=2.0), product of:
                0.12478739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035634913 = queryNorm
                0.2708308 = fieldWeight in 632, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=632)
          0.5 = coord(1/2)
      0.41666666 = coord(5/12)
    
    Abstract
    This paper maps the concepts "space" and "dimensionality" in classifications, and in particular in their visualizations, from a historical perspective. After a historical excursion through classification theory's treatment of what mathematics knows as dimensionality reduction in representations of a single universe of knowledge, its potential for information retrieval and navigation in the multiverse of the World Wide Web is explored.
    Date
    22. 2.2013 11:31:25
  7. Choi, Y.: ¬A complete assessment of tagging quality : a consolidated methodology (2015) 0.07
    0.07080228 = product of:
      0.28320912 = sum of:
        0.23289011 = weight(_text_:tagging in 1730) [ClassicSimilarity], result of:
          0.23289011 = score(doc=1730,freq=16.0), product of:
            0.21038401 = queryWeight, product of:
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.035634913 = queryNorm
            1.1069763 = fieldWeight in 1730, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.046875 = fieldNorm(doc=1730)
        0.025159499 = weight(_text_:web in 1730) [ClassicSimilarity], result of:
          0.025159499 = score(doc=1730,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.21634221 = fieldWeight in 1730, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1730)
        0.025159499 = weight(_text_:web in 1730) [ClassicSimilarity], result of:
          0.025159499 = score(doc=1730,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.21634221 = fieldWeight in 1730, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1730)
      0.25 = coord(3/12)
    
    Abstract
    This paper presents a methodological discussion of a study of tagging quality in subject indexing. The data analysis in the study was divided into three phases: analysis of indexing consistency, analysis of tagging effectiveness, and analysis of the semantic values of tags. To analyze indexing consistency, this study employed vector space model-based indexing consistency measures. An analysis of tagging effectiveness with tagging exhaustivity and tag specificity was conducted to ameliorate the drawbacks of consistency analysis based only on quantitative measures of vocabulary matching. To further investigate the semantic values of tags at various levels of specificity, a latent semantic analysis (LSA) was conducted. To test the statistical significance of the relation between tag specificity and semantic quality, a correlation analysis was conducted. This research demonstrates the potential of tags for web document indexing with a complete assessment of tagging quality and provides a basis for further study of the strengths and limitations of tagging.
    Theme
    Social tagging
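Result 7 above evaluates tagging quality partly through "vector space model-based indexing consistency measures." A common instance of such a measure is the cosine similarity between the term vectors produced by two indexers (or by taggers versus professional indexers) for the same document. The sketch below is a generic illustration under that assumption, not the specific measure used in the paper; the example term sets are invented.

```python
# Cosine-style indexing consistency between two term sets assigned to the same document.
# A generic vector-space illustration; the paper's exact measure may differ, and the
# example term sets are invented.
from collections import Counter
from math import sqrt

def cosine_consistency(terms_a: list[str], terms_b: list[str]) -> float:
    """Cosine similarity between the term-frequency vectors of two indexers."""
    va, vb = Counter(terms_a), Counter(terms_b)
    dot = sum(va[t] * vb[t] for t in set(va) | set(vb))
    norm_a = sqrt(sum(c * c for c in va.values()))
    norm_b = sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

tagger_terms = ["social tagging", "folksonomy", "web 2.0", "indexing"]
indexer_terms = ["subject indexing", "folksonomy", "indexing", "metadata"]
print(cosine_consistency(tagger_terms, indexer_terms))  # 0.5: two of four terms agree
```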
  8. Euzenat, J.; Shvaiko, P.: Ontology matching (2010) 0.07
    0.07018908 = product of:
      0.1684538 = sum of:
        0.04108529 = weight(_text_:web in 168) [ClassicSimilarity], result of:
          0.04108529 = score(doc=168,freq=12.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.35328537 = fieldWeight in 168, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=168)
        0.032903954 = weight(_text_:world in 168) [ClassicSimilarity], result of:
          0.032903954 = score(doc=168,freq=4.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.24022943 = fieldWeight in 168, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=168)
        0.043723192 = weight(_text_:wide in 168) [ClassicSimilarity], result of:
          0.043723192 = score(doc=168,freq=4.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.2769224 = fieldWeight in 168, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=168)
        0.04108529 = weight(_text_:web in 168) [ClassicSimilarity], result of:
          0.04108529 = score(doc=168,freq=12.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.35328537 = fieldWeight in 168, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=168)
        0.009656077 = product of:
          0.019312155 = sum of:
            0.019312155 = weight(_text_:22 in 168) [ClassicSimilarity], result of:
              0.019312155 = score(doc=168,freq=2.0), product of:
                0.12478739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035634913 = queryNorm
                0.15476047 = fieldWeight in 168, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=168)
          0.5 = coord(1/2)
      0.41666666 = coord(5/12)
    
    Abstract
    Ontologies are viewed as the silver bullet for many applications, but in open or evolving systems, different parties can adopt different ontologies. This increases heterogeneity problems rather than reducing heterogeneity. This book proposes ontology matching as a solution to the problem of semantic heterogeneity, offering researchers and practitioners a uniform framework of reference to currently available work. The techniques presented apply to database schema matching, catalog integration, XML schema matching and more. Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, artificial intelligence. With Ontology Matching, researchers and practitioners will find a reference book which presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can equally be applied to database schema matching, catalog integration, XML schema matching and other related problems. The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a detailed account of matching techniques and matching systems in a systematic way from theoretical, practical and application perspectives.
    Date
    20. 6.2012 19:08:22
    LCSH
    World wide web
    RSWK
    Datenintegration / Informationssystem / Matching / Ontologie <Wissensverarbeitung> / Schema <Informatik> / Semantic Web
    Subject
    Datenintegration / Informationssystem / Matching / Ontologie <Wissensverarbeitung> / Schema <Informatik> / Semantic Web
    World wide web
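Result 8 defines ontology matching as finding correspondences between semantically related entities of different ontologies. One of the simplest families of techniques the book surveys is terminological (label-based) matching; the sketch below is a minimal illustration of that idea, with invented entity labels and an arbitrary similarity threshold, and is not the authors' system.

```python
# Minimal label-based ontology matching: propose equivalence correspondences between
# entities whose normalized labels are sufficiently similar. Entity labels and the 0.8
# threshold are invented; real matchers combine terminological, structural, semantic,
# and instance-based techniques.
from difflib import SequenceMatcher

def label_similarity(a: str, b: str) -> float:
    """String similarity of normalized labels, in [0, 1]."""
    def norm(s: str) -> str:
        return s.lower().replace("_", " ").strip()
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

def match(entities1: list[str], entities2: list[str], threshold: float = 0.8):
    """Return candidate correspondences as (entity1, entity2, confidence) triples."""
    return [(e1, e2, round(label_similarity(e1, e2), 2))
            for e1 in entities1 for e2 in entities2
            if label_similarity(e1, e2) >= threshold]

catalog_schema = ["Book", "Author", "SubjectHeading"]
shop_schema = ["book", "writer", "subject_heading", "price"]
print(match(catalog_schema, shop_schema))
# [('Book', 'book', 1.0), ('SubjectHeading', 'subject_heading', 0.97)]
```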
  9. Oliveira Machado, L.M.; Souza, R.R.; Simões, M. da Graça: Semantic web or web of data? : a diachronic study (1999 to 2017) of the publications of Tim Berners-Lee and the World Wide Web Consortium (2019) 0.07
    0.06893462 = product of:
      0.20680386 = sum of:
        0.06953719 = weight(_text_:web in 5300) [ClassicSimilarity], result of:
          0.06953719 = score(doc=5300,freq=22.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.59793836 = fieldWeight in 5300, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5300)
        0.029083263 = weight(_text_:world in 5300) [ClassicSimilarity], result of:
          0.029083263 = score(doc=5300,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.21233483 = fieldWeight in 5300, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5300)
        0.038646206 = weight(_text_:wide in 5300) [ClassicSimilarity], result of:
          0.038646206 = score(doc=5300,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.24476713 = fieldWeight in 5300, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5300)
        0.06953719 = weight(_text_:web in 5300) [ClassicSimilarity], result of:
          0.06953719 = score(doc=5300,freq=22.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.59793836 = fieldWeight in 5300, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5300)
      0.33333334 = coord(4/12)
    
    Abstract
    The web has been, in the last decades, the place where information retrieval achieved its maximum importance, given its ubiquity and the sheer volume of information. However, its exponential growth made the retrieval task increasingly hard, relying for its effectiveness on idiosyncratic and somewhat biased ranking algorithms. To deal with this problem, a "new" web, called the Semantic Web (SW), was proposed, bringing along concepts like "Web of Data" and "Linked Data," although the definitions and connections among these concepts are often unclear. Based on a qualitative approach built over a literature review, a definition of SW is presented, discussing the related concepts sometimes used as synonyms. It concludes that the SW is a comprehensive and ambitious construct that includes the great purpose of making the web a global database. It also follows the specifications developed and/or associated with its operationalization and the necessary procedures for the connection of data in an open format on the web. The goal of this comprehensive SW is the union of two outcomes that are still only tenuously connected: the virtually unlimited possibility of connections between data (the web domain) and the potential for automated inference by "intelligent" systems (the semantic component).
    Theme
    Semantic Web
  10. Vaidya, P.; Harinarayana, N.S.: ¬The comparative and analytical study of LibraryThing tags with Library of Congress Subject Headings (2016) 0.07
    0.06883134 = product of:
      0.206494 = sum of:
        0.11644506 = weight(_text_:tagging in 2492) [ClassicSimilarity], result of:
          0.11644506 = score(doc=2492,freq=4.0), product of:
            0.21038401 = queryWeight, product of:
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.035634913 = queryNorm
            0.55348814 = fieldWeight in 2492, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.046875 = fieldNorm(doc=2492)
        0.025159499 = weight(_text_:web in 2492) [ClassicSimilarity], result of:
          0.025159499 = score(doc=2492,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.21634221 = fieldWeight in 2492, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2492)
        0.025159499 = weight(_text_:web in 2492) [ClassicSimilarity], result of:
          0.025159499 = score(doc=2492,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.21634221 = fieldWeight in 2492, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2492)
        0.039729945 = product of:
          0.07945989 = sum of:
            0.07945989 = weight(_text_:2.0 in 2492) [ClassicSimilarity], result of:
              0.07945989 = score(doc=2492,freq=2.0), product of:
                0.20667298 = queryWeight, product of:
                  5.799733 = idf(docFreq=363, maxDocs=44218)
                  0.035634913 = queryNorm
                0.3844716 = fieldWeight in 2492, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.799733 = idf(docFreq=363, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2492)
          0.5 = coord(1/2)
      0.33333334 = coord(4/12)
    
    Abstract
    The internet in its Web 2.0 version gives users the opportunity to participate and to enhance existing systems, making them dynamic and collaborative. The practice of social tagging to organize digital resources is an interesting object of study for information professionals, and comparing these user-generated terms with professionally created controlled vocabularies makes for an instructive analysis of one way of organizing resources for future retrieval. This study compares Library of Congress Subject Headings (LCSH) terms with LibraryThing social tags. The results of the comparative analysis show that social tags can be used to enhance metadata for information retrieval. Still, the uncontrolled nature of social tags remains a concern and creates uncertainty among researchers.
    Theme
    Social tagging
  11. Haustein, S.; Sugimoto, C.; Larivière, V.: Social media in scholarly communication : Guest editorial (2015) 0.07
    0.06770995 = product of:
      0.1625039 = sum of:
        0.08155354 = weight(_text_:filter in 3809) [ClassicSimilarity], result of:
          0.08155354 = score(doc=3809,freq=4.0), product of:
            0.24899386 = queryWeight, product of:
              6.987357 = idf(docFreq=110, maxDocs=44218)
              0.035634913 = queryNorm
            0.32753235 = fieldWeight in 3809, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.987357 = idf(docFreq=110, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3809)
        0.028129177 = weight(_text_:web in 3809) [ClassicSimilarity], result of:
          0.028129177 = score(doc=3809,freq=10.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.24187797 = fieldWeight in 3809, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3809)
        0.017449958 = weight(_text_:world in 3809) [ClassicSimilarity], result of:
          0.017449958 = score(doc=3809,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.12740089 = fieldWeight in 3809, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3809)
        0.028129177 = weight(_text_:web in 3809) [ClassicSimilarity], result of:
          0.028129177 = score(doc=3809,freq=10.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.24187797 = fieldWeight in 3809, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3809)
        0.007242058 = product of:
          0.014484116 = sum of:
            0.014484116 = weight(_text_:22 in 3809) [ClassicSimilarity], result of:
              0.014484116 = score(doc=3809,freq=2.0), product of:
                0.12478739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035634913 = queryNorm
                0.116070345 = fieldWeight in 3809, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3809)
          0.5 = coord(1/2)
      0.41666666 = coord(5/12)
    
    Abstract
    One of the solutions to help scientists filter the most relevant publications and, thus, to stay current on developments in their fields during the transition from "little science" to "big science" was the introduction of citation indexing as a Wellsian "World Brain" (Garfield, 1964) of scientific information: It is too much to expect a research worker to spend an inordinate amount of time searching for the bibliographic descendants of antecedent papers. It would not be excessive to demand that the thorough scholar check all papers that have cited or criticized such papers, if they could be located quickly. The citation index makes this check practicable (Garfield, 1955, p. 108). In retrospect, citation indexing can be perceived as a pre-social web version of crowdsourcing, as it is based on the concept that the community of citing authors outperforms indexers in highlighting cognitive links between papers, particularly on the level of specific ideas and concepts (Garfield, 1983). Over the last 50 years, citation analysis and, more generally, bibliometric methods have developed from information retrieval tools to research evaluation metrics, where they are presumed to make scientific funding more efficient and effective (Moed, 2006). However, the dominance of bibliometric indicators in research evaluation has also led to significant goal displacement (Merton, 1957) and the oversimplification of notions of "research productivity" and "scientific quality", creating adverse effects such as salami publishing, honorary authorships, citation cartels, and misuse of indicators (Binswanger, 2015; Cronin and Sugimoto, 2014; Frey and Osterloh, 2006; Haustein and Larivière, 2015; Weingart, 2005).
    Furthermore, the rise of the web, and subsequently, the social web, has challenged the quasi-monopolistic status of the journal as the main form of scholarly communication and citation indices as the primary assessment mechanisms. Scientific communication is becoming more open, transparent, and diverse: publications are increasingly open access; manuscripts, presentations, code, and data are shared online; research ideas and results are discussed and criticized openly on blogs; and new peer review experiments, with open post publication assessment by anonymous or non-anonymous referees, are underway. The diversification of scholarly production and assessment, paired with the increasing speed of the communication process, leads to an increased information overload (Bawden and Robinson, 2008), demanding new filters. The concept of altmetrics, short for alternative (to citation) metrics, was created out of an attempt to provide a filter (Priem et al., 2010) and to steer against the oversimplification of the measurement of scientific success solely on the basis of number of journal articles published and citations received, by considering a wider range of research outputs and metrics (Piwowar, 2013). Although the term altmetrics was introduced in a tweet in 2010 (Priem, 2010), the idea of capturing traces - "polymorphous mentioning" (Cronin et al., 1998, p. 1320) - of scholars and their documents on the web to measure "impact" of science in a broader manner than citations was introduced years before, largely in the context of webometrics (Almind and Ingwersen, 1997; Thelwall et al., 2005):
    There will soon be a critical mass of web-based digital objects and usage statistics on which to model scholars' communication behaviors - publishing, posting, blogging, scanning, reading, downloading, glossing, linking, citing, recommending, acknowledging - and with which to track their scholarly influence and impact, broadly conceived and broadly felt (Cronin, 2005, p. 196). A decade after Cronin's prediction and five years after the coining of altmetrics, the time seems ripe to reflect upon the role of social media in scholarly communication. This Special Issue does so by providing an overview of current research on the indicators and metrics grouped under the umbrella term of altmetrics, on their relationships with traditional indicators of scientific activity, and on the uses that are made of the various social media platforms - on which these indicators are based - by scientists of various disciplines.
    Date
    20. 1.2015 18:30:22
  12. Das, A.; Jain, A.: Indexing the World Wide Web : the journey so far (2012) 0.07
    0.06736527 = product of:
      0.20209579 = sum of:
        0.043577533 = weight(_text_:web in 95) [ClassicSimilarity], result of:
          0.043577533 = score(doc=95,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.37471575 = fieldWeight in 95, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=95)
        0.049355935 = weight(_text_:world in 95) [ClassicSimilarity], result of:
          0.049355935 = score(doc=95,freq=4.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.36034414 = fieldWeight in 95, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.046875 = fieldNorm(doc=95)
        0.06558479 = weight(_text_:wide in 95) [ClassicSimilarity], result of:
          0.06558479 = score(doc=95,freq=4.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.4153836 = fieldWeight in 95, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=95)
        0.043577533 = weight(_text_:web in 95) [ClassicSimilarity], result of:
          0.043577533 = score(doc=95,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.37471575 = fieldWeight in 95, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=95)
      0.33333334 = coord(4/12)
    
    Abstract
    In this chapter, the authors describe the key indexing components of today's web search engines. As the World Wide Web has grown, the systems and methods for indexing have changed significantly. The authors present the data structures used, the features extracted, the infrastructure needed, and the options available for designing a brand new search engine. They highlight techniques that improve the relevance of results, discuss trade-offs in making the best use of machine resources, and cover distributed processing concepts in this context. In particular, the authors delve into the topics of indexing phrases instead of terms, storage in memory vs. on disk, and data partitioning. Some thoughts on information organization for newly emerging data forms conclude the chapter.
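The chapter summarized in result 12 revolves around the data structures of web-scale indexing, including indexing phrases instead of single terms. The toy positional inverted index below illustrates why positions are stored: they allow phrase queries to be answered by checking term adjacency. It is a didactic sketch, not the chapter's implementation.

```python
# A toy positional inverted index: term -> {doc_id: [positions]}. Storing positions
# lets phrase queries ("world wide web") be answered by checking term adjacency.
from collections import defaultdict

class PositionalIndex:
    def __init__(self):
        self.postings = defaultdict(lambda: defaultdict(list))

    def add(self, doc_id: str, text: str) -> None:
        for pos, term in enumerate(text.lower().split()):
            self.postings[term][doc_id].append(pos)

    def phrase(self, phrase: str) -> set[str]:
        """Return doc_ids that contain the words of `phrase` in consecutive positions."""
        terms = phrase.lower().split()
        candidates = set(self.postings[terms[0]])
        for t in terms[1:]:
            candidates &= set(self.postings[t])
        hits = set()
        for doc in candidates:
            for start in self.postings[terms[0]][doc]:
                if all(start + i in self.postings[t][doc] for i, t in enumerate(terms)):
                    hits.add(doc)
                    break
        return hits

index = PositionalIndex()
index.add("d1", "indexing the world wide web")
index.add("d2", "wide area networks and the web")
print(index.phrase("world wide web"))  # {'d1'}
```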
  13. Shiri, A.: Powering search : the role of thesauri in new information environments (2012) 0.07
    0.06736527 = product of:
      0.20209579 = sum of:
        0.043577533 = weight(_text_:web in 1322) [ClassicSimilarity], result of:
          0.043577533 = score(doc=1322,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.37471575 = fieldWeight in 1322, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1322)
        0.049355935 = weight(_text_:world in 1322) [ClassicSimilarity], result of:
          0.049355935 = score(doc=1322,freq=4.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.36034414 = fieldWeight in 1322, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.046875 = fieldNorm(doc=1322)
        0.06558479 = weight(_text_:wide in 1322) [ClassicSimilarity], result of:
          0.06558479 = score(doc=1322,freq=4.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.4153836 = fieldWeight in 1322, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=1322)
        0.043577533 = weight(_text_:web in 1322) [ClassicSimilarity], result of:
          0.043577533 = score(doc=1322,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.37471575 = fieldWeight in 1322, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1322)
      0.33333334 = coord(4/12)
    
    Content
    Thesauri : introduction and recent developments -- Thesauri in interactive information retrieval -- User-centered approach to the evaluation of thesauri : query formulation and expansion -- Thesauri in web-based search systems -- Thesaurus-based search and browsing functionalities in new thesaurus construction standards -- Design of search user interfaces for thesauri -- Design of user interfaces for multilingual and meta-thesauri -- User-centered evaluation of thesaurus-enhanced search user interfaces -- Guidelines for the design of thesaurus-enhanced search user interfaces -- Current trends and developments.
    LCSH
    World Wide Web
    Subject
    World Wide Web
  14. Huang, S.-L.; Lin, S.-C.; Chan, Y.-C.: Investigating effectiveness and user acceptance of semantic social tagging for knowledge sharing (2012) 0.07
    0.06704194 = product of:
      0.26816776 = sum of:
        0.21784876 = weight(_text_:tagging in 2732) [ClassicSimilarity], result of:
          0.21784876 = score(doc=2732,freq=14.0), product of:
            0.21038401 = queryWeight, product of:
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.035634913 = queryNorm
            1.0354816 = fieldWeight in 2732, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.046875 = fieldNorm(doc=2732)
        0.025159499 = weight(_text_:web in 2732) [ClassicSimilarity], result of:
          0.025159499 = score(doc=2732,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.21634221 = fieldWeight in 2732, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2732)
        0.025159499 = weight(_text_:web in 2732) [ClassicSimilarity], result of:
          0.025159499 = score(doc=2732,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.21634221 = fieldWeight in 2732, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2732)
      0.25 = coord(3/12)
    
    Abstract
    Social tagging systems enable users to assign arbitrary tags to various digital resources. However, they face vague-meaning problems when users retrieve or present resources with keyword-based tags. To solve these problems, this study takes advantage of Semantic Web technology and the topological characteristics of knowledge maps to develop a system that comprises a semantic tagging mechanism and triple-pattern and visual searching mechanisms. A field experiment was conducted to evaluate the effectiveness and user acceptance of these mechanisms in a knowledge sharing context. The results show that the semantic social tagging system is more effective than a keyword-based system. The visualized knowledge map helps users capture an overview of the knowledge domain, reduces the cognitive effort of searching, and makes the search more enjoyable. Traditional keyword tagging with keyword search still has the advantage of ease of use, and users showed a higher intention to use it. This study also proposes directions for future development of semantic social tagging systems.
    Theme
    Social tagging
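Result 14's system offers triple-pattern searching over semantic tags. A triple pattern is a (subject, predicate, object) template in which any position may be left open; matching it against RDF-style triples is straightforward, as the sketch below shows. The sample triples and predicate names are invented for illustration and do not come from the paper.

```python
# Minimal triple-pattern matching over RDF-style (subject, predicate, object) triples.
# The sample triples and predicate names are invented for illustration.
TRIPLES = {
    ("doc:42", "tag:hasTag", "concept:SocialTagging"),
    ("doc:42", "tag:taggedBy", "user:alice"),
    ("doc:7", "tag:hasTag", "concept:SemanticWeb"),
    ("doc:7", "tag:taggedBy", "user:alice"),
}

def match_pattern(pattern, triples=TRIPLES):
    """Yield triples matching a (s, p, o) pattern; None in any position means 'match anything'."""
    for triple in triples:
        if all(want is None or want == got for want, got in zip(pattern, triple)):
            yield triple

# Which documents did user:alice tag?  Pattern: (?, tag:taggedBy, user:alice)
for subject, _, _ in match_pattern((None, "tag:taggedBy", "user:alice")):
    print(subject)  # doc:42 and doc:7 (set iteration order is arbitrary)
```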
  15. Doran, D.; Gokhale, S.S.: ¬A classification framework for web robots (2012) 0.07
    0.06588599 = product of:
      0.26354396 = sum of:
        0.067092 = weight(_text_:web in 505) [ClassicSimilarity], result of:
          0.067092 = score(doc=505,freq=8.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.5769126 = fieldWeight in 505, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=505)
        0.12935995 = weight(_text_:log in 505) [ClassicSimilarity], result of:
          0.12935995 = score(doc=505,freq=2.0), product of:
            0.22837062 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.035634913 = queryNorm
            0.5664474 = fieldWeight in 505, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0625 = fieldNorm(doc=505)
        0.067092 = weight(_text_:web in 505) [ClassicSimilarity], result of:
          0.067092 = score(doc=505,freq=8.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.5769126 = fieldWeight in 505, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=505)
      0.25 = coord(3/12)
    
    Abstract
    The behavior of modern web robots varies widely when they crawl for different purposes. In this article, we present a framework to classify these web robots from two orthogonal perspectives, namely, their functionality and the types of resources they consume. Applying the classification framework to a year-long access log from the UConn SoE web server, we present trends that point to significant differences in their crawling behavior.
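Result 15 classifies web robots along two orthogonal axes, functionality and the resource types they request, using a year-long server access log. The sketch below shows one way to start from such a log: parse Common Log Format lines, tally the resource types requested per client, and flag clients that fetch robots.txt. The log format, the extension-to-type mapping, and this detection heuristic are illustrative assumptions, not the paper's framework.

```python
# Sketch: tally the resource types requested per client in a Common Log Format access
# log and flag clients that fetch /robots.txt as likely robots. The log format, the
# extension-to-type mapping, and this heuristic are illustrative assumptions, not the
# classification framework of the paper.
import re
from collections import Counter, defaultdict

LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST|HEAD) (\S+) [^"]*"')

RESOURCE_TYPES = {".html": "text", ".htm": "text", ".pdf": "document",
                  ".jpg": "image", ".png": "image", ".css": "style", ".js": "script"}

def resource_type(path: str) -> str:
    path = path.lower().split("?")[0]
    for ext, rtype in RESOURCE_TYPES.items():
        if path.endswith(ext):
            return rtype
    return "other"

def profile_clients(log_lines):
    """Return per-client resource-type counts and the set of clients fetching robots.txt."""
    profiles = defaultdict(Counter)
    likely_robots = set()
    for line in log_lines:
        m = LOG_LINE.match(line)
        if not m:
            continue
        client, path = m.groups()
        profiles[client][resource_type(path)] += 1
        if path.endswith("/robots.txt"):
            likely_robots.add(client)
    return profiles, likely_robots

sample = [
    '66.249.1.1 - - [01/Jan/2012:00:00:00 +0000] "GET /robots.txt HTTP/1.1" 200 100',
    '66.249.1.1 - - [01/Jan/2012:00:00:05 +0000] "GET /papers/index.html HTTP/1.1" 200 5000',
]
profiles, robots = profile_clients(sample)
print(dict(profiles), robots)  # {'66.249.1.1': Counter({'other': 1, 'text': 1})} {'66.249.1.1'}
```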
  16. Liu, B.: Web data mining : exploring hyperlinks, contents, and usage data (2011) 0.07
    0.06585966 = product of:
      0.19757898 = sum of:
        0.060475912 = weight(_text_:web in 354) [ClassicSimilarity], result of:
          0.060475912 = score(doc=354,freq=26.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.520022 = fieldWeight in 354, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=354)
        0.032903954 = weight(_text_:world in 354) [ClassicSimilarity], result of:
          0.032903954 = score(doc=354,freq=4.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.24022943 = fieldWeight in 354, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=354)
        0.043723192 = weight(_text_:wide in 354) [ClassicSimilarity], result of:
          0.043723192 = score(doc=354,freq=4.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.2769224 = fieldWeight in 354, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=354)
        0.060475912 = weight(_text_:web in 354) [ClassicSimilarity], result of:
          0.060475912 = score(doc=354,freq=26.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.520022 = fieldWeight in 354, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=354)
      0.33333334 = coord(4/12)
    
    Abstract
    Web mining aims to discover useful information and knowledge from the Web hyperlink structure, page contents, and usage data. Although Web mining uses many conventional data mining techniques, it is not purely an application of traditional data mining due to the semistructured and unstructured nature of the Web data and its heterogeneity. It has also developed many of its own algorithms and techniques. Liu has written a comprehensive text on Web data mining. Key topics of structure mining, content mining, and usage mining are covered both in breadth and in depth. His book brings together all the essential concepts and algorithms from related areas such as data mining, machine learning, and text processing to form an authoritative and coherent text. The book offers a rich blend of theory and practice, addressing seminal research ideas, as well as examining the technology from a practical point of view. It is suitable for students, researchers and practitioners interested in Web mining both as a learning text and a reference book. Lecturers can readily use it for classes on data mining, Web mining, and Web search. Additional teaching materials such as lecture slides, datasets, and implemented algorithms are available online.
    Content
    Contents: 1. Introduction 2. Association Rules and Sequential Patterns 3. Supervised Learning 4. Unsupervised Learning 5. Partially Supervised Learning 6. Information Retrieval and Web Search 7. Social Network Analysis 8. Web Crawling 9. Structured Data Extraction: Wrapper Generation 10. Information Integration
    RSWK
    World Wide Web / Data Mining
    Subject
    World Wide Web / Data Mining
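    To make the link-structure side of the book concrete: link-based ranking methods such as PageRank, which the book treats alongside social network analysis and Web search, score a page by the scores of the pages that link to it. A minimal power-iteration sketch over a hypothetical four-page toy graph (the graph and the damping factor are illustrative assumptions, not material from the book):

```python
# Minimal PageRank power iteration over a toy hyperlink graph.
# The graph and the damping factor are illustrative assumptions.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
damping = 0.85
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):  # iterate until the ranks stabilise
    rank = {
        p: (1 - damping) / len(pages)
           + damping * sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        for p in pages
    }

print({p: round(r, 3) for p in sorted(rank)})
```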
  17. Saabiyeh, N.: What is a good ontology semantic similarity measure that considers multiple inheritance cases of concepts? (2018) 0.07
    0.06550072 = product of:
      0.19650216 = sum of:
        0.050840456 = weight(_text_:web in 4530) [ClassicSimilarity], result of:
          0.050840456 = score(doc=4530,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.43716836 = fieldWeight in 4530, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4530)
        0.040716566 = weight(_text_:world in 4530) [ClassicSimilarity], result of:
          0.040716566 = score(doc=4530,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.29726875 = fieldWeight in 4530, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4530)
        0.05410469 = weight(_text_:wide in 4530) [ClassicSimilarity], result of:
          0.05410469 = score(doc=4530,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.342674 = fieldWeight in 4530, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4530)
        0.050840456 = weight(_text_:web in 4530) [ClassicSimilarity], result of:
          0.050840456 = score(doc=4530,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.43716836 = fieldWeight in 4530, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4530)
      0.33333334 = coord(4/12)
    
    Abstract
    I need to measure the semantic similarity between CSO ontology concepts, based on the ontology structure (concept path, depth, least common subsumer (LCS) ...). CSO (Computer Science Ontology) is a large-scale ontology of research areas. A concept in CSO may have multiple parents/super concepts (i.e. a concept may be a child of many other concepts), e.g.: (world wide web) is a parent of (semantic web), and (semantics) is a parent of (semantic web). I found some measures that meet my needs, but the papers proposing these measures are not cited, so I am hesitant to rely on them. I also found a measure that depends on weighted edges, but it does not consider multiple inheritance (super concepts).
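    One common family of answers to this kind of question is depth- and LCS-based measures such as Wu-Palmer similarity, sim(c1, c2) = 2 * depth(LCS) / (depth(c1) + depth(c2)), which can be adapted to multiple inheritance by taking depth as the shortest distance from the root and choosing the deepest common ancestor as the LCS. A minimal sketch over a hypothetical fragment inspired by the example above (the toy graph and the choice of Wu-Palmer are illustrative assumptions, not a statement about which measure CSO itself recommends):

```python
from collections import deque

# Toy multiple-inheritance fragment: "semantic web" has two parents.
parents = {
    "semantic web": ["world wide web", "semantics"],
    "web mining": ["world wide web"],
    "world wide web": ["computer science"],
    "semantics": ["computer science"],
    "computer science": [],  # root
}

# Minimum depth of every concept, measured from the root by breadth-first search.
children = {c: [] for c in parents}
for child, ps in parents.items():
    for p in ps:
        children[p].append(child)
depth = {}
queue = deque([("computer science", 0)])
while queue:
    node, d = queue.popleft()
    if node not in depth:
        depth[node] = d
        queue.extend((ch, d + 1) for ch in children[node])

def ancestors(concept):
    """All ancestors of a concept, including itself (handles multiple parents)."""
    seen, stack = set(), [concept]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(parents[node])
    return seen

def wu_palmer(c1, c2):
    common = ancestors(c1) & ancestors(c2)
    lcs_depth = max(depth[c] for c in common)   # deepest common subsumer
    return 2 * lcs_depth / (depth[c1] + depth[c2])

print(wu_palmer("semantic web", "web mining"))  # LCS is "world wide web" -> 0.5
```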
  18. Tredinnick, L.: Each one of us was several : networks, rhizomes and Web organisms (2013) 0.06
    0.06459736 = product of:
      0.19379207 = sum of:
        0.056258354 = weight(_text_:web in 1364) [ClassicSimilarity], result of:
          0.056258354 = score(doc=1364,freq=10.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.48375595 = fieldWeight in 1364, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1364)
        0.034899916 = weight(_text_:world in 1364) [ClassicSimilarity], result of:
          0.034899916 = score(doc=1364,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.25480178 = fieldWeight in 1364, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.046875 = fieldNorm(doc=1364)
        0.046375446 = weight(_text_:wide in 1364) [ClassicSimilarity], result of:
          0.046375446 = score(doc=1364,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.29372054 = fieldWeight in 1364, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=1364)
        0.056258354 = weight(_text_:web in 1364) [ClassicSimilarity], result of:
          0.056258354 = score(doc=1364,freq=10.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.48375595 = fieldWeight in 1364, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1364)
      0.33333334 = coord(4/12)
    
    Abstract
    This paper develops a conceptual analysis of hypertext and the World Wide Web by exploring the contrasting metaphors of the network and the rhizome. The idea of the network has influenced conceptual thinking about both the web and its wider socio-cultural influence. The paper develops an alternative description of the structure of hypertext and the web in terms of interrupted and dissipated energy flows. It concludes that the web should be considered not as a particular set of protocols and technological standards, nor as an interlinked set of technologically mediated services, but as a dynamic reorganisation of the socio-cultural system itself: one that at its inception became associated with particular forms of technology, but which has no determinate boundaries and is properly constituted in the spaces between technologies and the spaces between persons.
  19. Next generation search engines : advanced models for information retrieval (2012) 0.06
    0.064427115 = product of:
      0.15462509 = sum of:
        0.034307953 = weight(_text_:tagging in 357) [ClassicSimilarity], result of:
          0.034307953 = score(doc=357,freq=2.0), product of:
            0.21038401 = queryWeight, product of:
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.035634913 = queryNorm
            0.163073 = fieldWeight in 357, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.01953125 = fieldNorm(doc=357)
        0.03922426 = weight(_text_:web in 357) [ClassicSimilarity], result of:
          0.03922426 = score(doc=357,freq=28.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.3372827 = fieldWeight in 357, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=357)
        0.014541632 = weight(_text_:world in 357) [ClassicSimilarity], result of:
          0.014541632 = score(doc=357,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.10616741 = fieldWeight in 357, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.01953125 = fieldNorm(doc=357)
        0.027326997 = weight(_text_:wide in 357) [ClassicSimilarity], result of:
          0.027326997 = score(doc=357,freq=4.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.17307651 = fieldWeight in 357, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=357)
        0.03922426 = weight(_text_:web in 357) [ClassicSimilarity], result of:
          0.03922426 = score(doc=357,freq=28.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.3372827 = fieldWeight in 357, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=357)
      0.41666666 = coord(5/12)
    
    Abstract
    The main goal of this book is to transfer new research results from the fields of advanced computer science and information science to the design of new search engines. Readers will gain a better idea of the new trends in applied research. The achievement of relevant, organized, sorted, and workable answers - to name but a few qualities - from a search is becoming a daily need for enterprises and organizations and, to a greater extent, for anyone. It does not consist of getting access to structured information as in standard databases; nor does it consist of searching for information strictly by way of a combination of keywords. It goes far beyond that. Whatever its modality, the information sought should be identified by the topics it contains, that is to say by its textual, audio, video, or graphical contents. This is not a new issue. However, recent technological advances have completely changed the techniques being used. New Web technologies, the emergence of intranet systems, and the abundance of information on the Internet have created the need for efficient search and information access tools.
    Recent technological progress in computer science, Web technologies, and the constantly evolving information available on the Internet has drastically changed the landscape of search and access to information. Web search has evolved significantly in recent years. In the beginning, web search engines such as Google and Yahoo! provided search only over text documents. Aggregated search was one of the first steps to go beyond text search, and marked the beginning of a new era for information seeking and retrieval. These days, new web search engines support aggregated search over a number of verticals, and blend different types of documents (e.g., images, videos) in their search results. New search engines employ advanced techniques involving machine learning, computational linguistics and psychology, user interaction and modeling, information visualization, Web engineering, artificial intelligence, distributed systems, social networks, statistical analysis, semantic analysis, and technologies over query sessions. Documents no longer exist on their own; they are connected to other documents, they are associated with users and their position in a social network, and they can be mapped onto a variety of ontologies. Similarly, retrieval tasks have become more interactive and are solidly embedded in a user's geospatial, social, and historical context. It is conjectured that new breakthroughs in information retrieval will not come from smarter algorithms that better exploit existing information sources, but from new retrieval algorithms that can intelligently use and combine new sources of contextual metadata.
    With the rapid growth of web-based applications such as search engines, Facebook, and Twitter, the development of effective and personalized information retrieval techniques and user interfaces is essential. The amount of shared information and of social networks has also grown considerably, requiring metadata for new sources of information like Wikipedia and ODP. These metadata have to provide classification information for a wide range of topics, as well as for social networking sites like Twitter and Facebook, each of which provides additional preferences, tagging information, and social contexts. Due to the explosion of social networks and other metadata sources, it is an opportune time to identify ways to exploit such metadata in IR tasks such as user modeling, query understanding, and personalization, to name a few. Although the use of traditional metadata such as HTML text, web page titles, and anchor text is fairly well understood, the use of category information, user behavior data, and geographical information is just beginning to be studied. This book is intended for scientists and decision-makers who wish to gain working knowledge about search engines in order to evaluate available solutions and to dialogue with software and data providers.
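    To illustrate the kind of aggregated search described above: a blended result page is typically assembled by scoring candidates from several verticals and interleaving them into one list. A minimal score-based blending sketch over hypothetical vertical results (the verticals, documents, scores, and per-vertical weights are invented for illustration; real systems learn such weights from interaction data):

```python
# Minimal score-based blending of results from several search verticals.
# Vertical names, candidate documents, scores, and weights are illustrative only.
vertical_results = {
    "web":   [("web-mining-textbook", 0.92), ("search-tutorial", 0.71)],
    "image": [("crawler-architecture-diagram", 0.88)],
    "video": [("web-mining-lecture", 0.64)],
}
vertical_weights = {"web": 1.0, "image": 0.8, "video": 0.7}

# Scale each vertical's scores by its weight, then merge into a single ranking.
blended = sorted(
    ((score * vertical_weights[vertical], vertical, doc)
     for vertical, results in vertical_results.items()
     for doc, score in results),
    reverse=True,
)

for score, vertical, doc in blended:
    print(f"{score:.2f}  [{vertical:5}]  {doc}")
```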
    Content
    Contains the contributions: Das, A., A. Jain: Indexing the World Wide Web: the journey so far. Ke, W.: Decentralized search and the clustering paradox in large scale information networks. Roux, M.: Metadata for search engines: what can be learned from e-Sciences? Fluhr, C.: Crosslingual access to photo databases. Djioua, B., J.-P. Desclés and M. Alrahabi: Searching and mining with semantic categories. Ghorbel, H., A. Bahri and R. Bouaziz: Fuzzy ontologies building platform for Semantic Web: FOB platform. Lassalle, E., E. Lassalle: Semantic models in information retrieval. Berry, M.W., R. Esau and B. Kiefer: The use of text mining techniques in electronic discovery for legal matters. Sleem-Amer, M., I. Bigorgne, S. Brizard et al.: Intelligent semantic search engines for opinion and sentiment mining. Hoeber, O.: Human-centred Web search.
    Vert, S.: Extensions of Web browsers useful to knowledge workers. Chen, L.-C.: Next generation search engine for the result clustering technology. Biskri, I., L. Rompré: Using association rules for query reformulation. Habernal, I., M. Konopík and O. Rohlík: Question answering. Grau, B.: Finding answers to questions, in text collections or Web, in open domain or specialty domains. Berri, J., R. Benlamri: Context-aware mobile search engine. Bouidghaghen, O., L. Tamine: Spatio-temporal based personalization for mobile search. Chaudiron, S., M. Ihadjadene: Studying Web search engines from a user perspective: key concepts and main approaches. Karaman, F.: Artificial intelligence enabled search engines (AIESE) and the implications. Lewandowski, D.: A framework for evaluating the retrieval effectiveness of search engines.
  20. Kruschwitz, U.; Lungley, D.; Albakour, M-D.; Song, D.: Deriving query suggestions for site search (2013) 0.06
    0.06433566 = product of:
      0.25734264 = sum of:
        0.029650755 = weight(_text_:web in 1085) [ClassicSimilarity], result of:
          0.029650755 = score(doc=1085,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25496176 = fieldWeight in 1085, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1085)
        0.19804114 = weight(_text_:log in 1085) [ClassicSimilarity], result of:
          0.19804114 = score(doc=1085,freq=12.0), product of:
            0.22837062 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.035634913 = queryNorm
            0.86719185 = fieldWeight in 1085, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1085)
        0.029650755 = weight(_text_:web in 1085) [ClassicSimilarity], result of:
          0.029650755 = score(doc=1085,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25496176 = fieldWeight in 1085, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1085)
      0.25 = coord(3/12)
    
    Abstract
    Modern search engines have been moving away from simplistic interfaces that aimed to satisfy a user's need with a single-shot query. Interactive features are now integral parts of web search engines. However, generating good query modification suggestions remains a challenging issue. Query log analysis is one of the major strands of work in this direction. Although much research has been performed on query logs collected on the web as a whole, query log analysis to enhance search on smaller and more focused collections has attracted less attention, despite its increasing practical importance. In this article, we report on a systematic study of different query modification methods applied to a substantial query log collected on a local website that already uses an interactive search engine. We conducted experiments in which we asked users to assess the relevance of potential query modification suggestions that had been constructed using a range of log analysis methods and different baseline approaches. The experimental results demonstrate the usefulness of log analysis for extracting query modification suggestions. Furthermore, our experiments demonstrate that a more fine-grained approach than grouping search requests into sessions allows better refinement terms to be extracted from query log files.
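    A simple version of the log-based approach studied here derives suggestions from co-occurrence within a session: queries that follow an initial query in the same session become candidate modifications, ranked by frequency. A minimal sketch over a hypothetical toy log (the log records and the session-based grouping are illustrative; the article itself compares finer-grained methods against exactly this kind of session grouping):

```python
from collections import Counter, defaultdict

# Toy query log as (session_id, query) pairs in time order; purely illustrative data.
log = [
    (1, "library opening hours"), (1, "library opening hours christmas"),
    (2, "timetable"),             (2, "exam timetable"),
    (3, "library opening hours"), (3, "library opening hours weekend"),
    (4, "exam timetable"),        (4, "exam timetable resit"),
]

# Group queries by session, then count which queries follow which within a session.
sessions = defaultdict(list)
for session_id, query in log:
    sessions[session_id].append(query)

followups = defaultdict(Counter)
for queries in sessions.values():
    for i, query in enumerate(queries):
        for later in queries[i + 1:]:
            if later != query:
                followups[query][later] += 1

def suggest(query, k=3):
    """Return up to k query modification suggestions for the given query."""
    return [q for q, _ in followups[query].most_common(k)]

print(suggest("library opening hours"))
# ['library opening hours christmas', 'library opening hours weekend']
```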

Types

  • a 1559
  • el 136
  • m 134
  • s 54
  • x 13
  • b 5
  • r 5
  • n 2
  • i 1
  • p 1
