Search (7695 results, page 17 of 385)

  1. Ebrahimi, M.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.04
    0.044192746 = product of:
      0.08838549 = sum of:
        0.08838549 = sum of:
          0.053092297 = weight(_text_:web in 4553) [ClassicSimilarity], result of:
            0.053092297 = score(doc=4553,freq=6.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.3122631 = fieldWeight in 4553, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4553)
          0.03529319 = weight(_text_:22 in 4553) [ClassicSimilarity], result of:
            0.03529319 = score(doc=4553,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.19345059 = fieldWeight in 4553, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4553)
      0.5 = coord(1/2)
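
    The score breakdown above is Lucene ClassicSimilarity (TF-IDF) "explain" output, and every number in it derives from four quantities per query term: tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), a query-wide queryNorm, and a per-field fieldNorm. As a reading aid, here is a minimal Python sketch (function names are ours, not Lucene's) that recomputes this result's 0.04 from the values shown above, treating queryNorm, fieldNorm and the coord(1/2) factor as given constants:

        from math import sqrt, log

        def idf(doc_freq, max_docs):
            # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
            return 1.0 + log(max_docs / (doc_freq + 1))

        def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
            # queryWeight = idf * queryNorm; fieldWeight = tf * idf * fieldNorm
            term_idf = idf(doc_freq, max_docs)
            query_weight = term_idf * query_norm
            field_weight = sqrt(freq) * term_idf * field_norm
            return query_weight * field_weight

        # Constants read off the explain tree for doc 4553:
        QUERY_NORM, FIELD_NORM, MAX_DOCS = 0.052098576, 0.0390625, 44218
        web = term_score(6.0, 4597, MAX_DOCS, QUERY_NORM, FIELD_NORM)  # ~0.053092
        t22 = term_score(2.0, 3622, MAX_DOCS, QUERY_NORM, FIELD_NORM)  # ~0.035293
        print((web + t22) * 0.5)  # coord(1/2) -> ~0.044193, shown rounded as 0.04

    The same arithmetic, with different freq, fieldNorm and coord values, reproduces every other score on this page.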
    
    Abstract
    Semantic Web knowledge representation standards, in particular RDF and OWL, often come endowed with a formal semantics, which is considered to be of fundamental importance for the field. Reasoning, i.e., the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms that can be proven to be sound, complete, and terminating, i.e., correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever-increasing amounts of available Semantic Web data and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which promise high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard for correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a deep learning system on RDF knowledge graphs such that it is able to perform reasoning over new RDF knowledge graphs with high precision and recall relative to the deductive gold standard.
    Date
    16.11.2018 14:22:01
    Theme
    Semantic Web
  2. Sachse, J.: The influence of snippet length on user behavior in mobile web search (2019) 0.04
    0.044192746 = product of:
      0.08838549 = sum of:
        0.08838549 = sum of:
          0.053092297 = weight(_text_:web in 5493) [ClassicSimilarity], result of:
            0.053092297 = score(doc=5493,freq=6.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.3122631 = fieldWeight in 5493, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5493)
          0.03529319 = weight(_text_:22 in 5493) [ClassicSimilarity], result of:
            0.03529319 = score(doc=5493,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.19345059 = fieldWeight in 5493, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5493)
      0.5 = coord(1/2)
    
    Abstract
    Purpose: Web search is increasingly moving into mobile contexts. However, the screen size of mobile devices is limited, and search engine result pages face a trade-off between offering informative snippets and optimal use of space. One factor clearly influencing this trade-off is snippet length. The purpose of this paper is to find out what snippet size to use in mobile web search.
    Design/methodology/approach: For this purpose, an eye-tracking experiment was conducted showing participants search interfaces with snippets of one, three or five lines on a mobile device, analyzing 17 dependent variables. In total, 31 participants took part in the study. Each participant solved informational and navigational tasks.
    Findings: Results indicate a strong influence of the page fold on scrolling behavior and attention distribution across search results. Regardless of query type, short snippets seem to provide too little information about the result, so that search performance and subjective measures are negatively affected. Long snippets of five lines lead to better performance than medium snippets for navigational queries, but to worse performance for informational queries.
    Originality/value: Although space in mobile search is limited, this study shows that longer snippets improve usability and user experience. It further emphasizes that the page fold plays a stronger role in mobile than in desktop search for attention distribution.
    Date
    20. 1.2015 18:30:22
  3. Shoffner, M.; Greenberg, J.; Kramer-Duffield, J.; Woodbury, D.: Web 2.0 semantic systems : collaborative learning in science (2008) 0.04
    0.044150814 = product of:
      0.08830163 = sum of:
        0.08830163 = sum of:
          0.06006708 = weight(_text_:web in 2661) [ClassicSimilarity], result of:
            0.06006708 = score(doc=2661,freq=12.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.35328537 = fieldWeight in 2661, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.03125 = fieldNorm(doc=2661)
          0.028234553 = weight(_text_:22 in 2661) [ClassicSimilarity], result of:
            0.028234553 = score(doc=2661,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.15476047 = fieldWeight in 2661, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2661)
      0.5 = coord(1/2)
    
    Abstract
    The basic goal of education within a discipline is to transform a novice into an expert. This entails moving the novice toward the "semantic space" that the expert inhabits: the space of concepts, meanings, vocabularies, and other intellectual constructs that comprise the discipline. Metadata is significant to this goal in digitally mediated education environments. Encoding the experts' semantic space not only enables the sharing of semantics among discipline scientists, but also creates an environment that bridges the semantic gap between the common vocabulary of the novice and the granular descriptive language of the seasoned scientist (Greenberg et al., 2005). Developments underlying the Semantic Web, where vocabularies are formalized in the Web Ontology Language (OWL), and Web 2.0 approaches of user-generated folksonomies provide an infrastructure for linking vocabulary systems and promoting group learning via metadata literacy. Group learning is a pedagogical approach to teaching that harnesses the phenomenon of "collective intelligence" to increase learning by means of collaboration. Learning a new semantic system can be daunting for a novice, and yet it is integral to advancing one's knowledge in a discipline and retaining interest. These ideas are key to the "BOT 2.0: Botany through Web 2.0, the Memex and Social Learning" project (Bot 2.0). Bot 2.0 is a collaboration involving the North Carolina Botanical Garden, the UNC SILS Metadata Research Center, and the Renaissance Computing Institute (RENCI). Bot 2.0 presents a curriculum utilizing a memex as a way for students to link and share digital information, working asynchronously in an environment beyond the traditional classroom. Our conception of a memex is not a centralized black box but rather a flexible, distributed framework that uses the most salient and easiest-to-use collaborative platforms (e.g., Facebook, Flickr, wiki and blog technology) for personal information management. By meeting students "where they live" digitally, we hope to attract students to the study of botanical science. A key aspect is to teach students scientific terminology and about the value of metadata, an inherent function in several of the technologies and in the instructional approach we are utilizing. This poster will report on a study examining the value of both folksonomies and taxonomies for post-secondary college students learning plant identification. Our data are drawn from a curriculum involving a virtual independent learning portion and a "BotCamp" weekend at UNC, where students work with digital plant specimens that they have captured. Results provide some insight into the importance of collaboration and shared vocabulary for gaining confidence and for student progression from novice to expert in botany.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
    Theme
    Semantic Web
  4. Euzenat, J.; Shvaiko, P.: Ontology matching (2010) 0.04
    0.044150814 = product of:
      0.08830163 = sum of:
        0.08830163 = sum of:
          0.06006708 = weight(_text_:web in 168) [ClassicSimilarity], result of:
            0.06006708 = score(doc=168,freq=12.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.35328537 = fieldWeight in 168, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.03125 = fieldNorm(doc=168)
          0.028234553 = weight(_text_:22 in 168) [ClassicSimilarity], result of:
            0.028234553 = score(doc=168,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.15476047 = fieldWeight in 168, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=168)
      0.5 = coord(1/2)
    
    Abstract
    Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, artificial intelligence. With Ontology Matching, researchers and practitioners will find a reference book which presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can equally be applied to database schema matching, catalog integration, XML schema matching and other related problems. The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a detailed account of matching techniques and matching systems in a systematic way from theoretical, practical and application perspectives.
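    For readers new to the topic: the book formalizes a match as a correspondence between an entity of one ontology and an entity of another, together with a relation and a confidence degree. The sketch below merely illustrates that structure; the class name, relation symbols and IRIs are our assumptions, not the book's API:

        from dataclasses import dataclass

        @dataclass
        class Correspondence:
            # One alignment cell: entity1 (from ontology O1) relates to entity2 (from O2).
            entity1: str       # e.g. a class or property IRI in the source ontology
            entity2: str       # the matched IRI in the target ontology
            relation: str      # "=" (equivalence), "<" (subsumption), "%" (disjointness), ...
            confidence: float  # degree of trust in the correspondence, typically in [0, 1]

        # Hypothetical example: class Book in O1 judged equivalent to Monograph in O2.
        cell = Correspondence("http://o1.example/Book", "http://o2.example/Monograph", "=", 0.87)

    An alignment is then a set of such cells, which downstream tasks such as schema matching or catalog integration can consume.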
    Date
    20. 6.2012 19:08:22
    LCSH
    World wide web
    RSWK
    Datenintegration / Informationssystem / Matching / Ontologie <Wissensverarbeitung> / Schema <Informatik> / Semantic Web
    Subject
    Datenintegration / Informationssystem / Matching / Ontologie <Wissensverarbeitung> / Schema <Informatik> / Semantic Web
    World wide web
  5. Ziegler, C.: Smartes Chaos : Web 2.0 versus Semantic Web (2006) 0.04
    0.04334968 = product of:
      0.08669936 = sum of:
        0.08669936 = product of:
          0.17339872 = sum of:
            0.17339872 = weight(_text_:web in 4868) [ClassicSimilarity], result of:
              0.17339872 = score(doc=4868,freq=16.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                1.019847 = fieldWeight in 4868, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4868)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Web 2.0 and the Semantic Web are both setting out to breathe new life into the classic WWW. In doing so, Web 2.0 could develop into exactly what the Semantic Web wanted to be, never became, and possibly never can be.
    Object
    Web 2.0
    Theme
    Semantic Web
  6. Bravo, B.R. -> Rodriguez Bravo, B.: 0.04
    0.042351827 = product of:
      0.084703654 = sum of:
        0.084703654 = product of:
          0.16940731 = sum of:
            0.16940731 = weight(_text_:22 in 1) [ClassicSimilarity], result of:
              0.16940731 = score(doc=1,freq=2.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.92856276 = fieldWeight in 1, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.1875 = fieldNorm(doc=1)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 4.2007 19:43:53
  7. Wal, T. Vander -> Vander Wal, T.: 0.04
    0.042351827 = product of:
      0.084703654 = sum of:
        0.084703654 = product of:
          0.16940731 = sum of:
            0.16940731 = weight(_text_:22 in 580) [ClassicSimilarity], result of:
              0.16940731 = score(doc=580,freq=2.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.92856276 = fieldWeight in 580, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.1875 = fieldNorm(doc=580)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2009 9:15:50
  8. #778 0.04
    0.042351827 = product of:
      0.084703654 = sum of:
        0.084703654 = product of:
          0.16940731 = sum of:
            0.16940731 = weight(_text_:22 in 777) [ClassicSimilarity], result of:
              0.16940731 = score(doc=777,freq=2.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.92856276 = fieldWeight in 777, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.1875 = fieldNorm(doc=777)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    15. 2.1998 9:45:22
  9. Álvarez, E. Corera- -> Corera-Álvarez, E.: 0.04
    0.042351827 = product of:
      0.084703654 = sum of:
        0.084703654 = product of:
          0.16940731 = sum of:
            0.16940731 = weight(_text_:22 in 942) [ClassicSimilarity], result of:
              0.16940731 = score(doc=942,freq=2.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.92856276 = fieldWeight in 942, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.1875 = fieldNorm(doc=942)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    8. 2.2008 12:22:10
  10. Alvis, R. de -> Seidler-de Alvis, R.: 0.04
    0.042351827 = product of:
      0.084703654 = sum of:
        0.084703654 = product of:
          0.16940731 = sum of:
            0.16940731 = weight(_text_:22 in 1080) [ClassicSimilarity], result of:
              0.16940731 = score(doc=1080,freq=2.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.92856276 = fieldWeight in 1080, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.1875 = fieldNorm(doc=1080)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    2.10.1996 17:14:22
  11. Castillo, M. Davey -> Davey Castillo, M.: 0.04
    0.042351827 = product of:
      0.084703654 = sum of:
        0.084703654 = product of:
          0.16940731 = sum of:
            0.16940731 = weight(_text_:22 in 2447) [ClassicSimilarity], result of:
              0.16940731 = score(doc=2447,freq=2.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.92856276 = fieldWeight in 2447, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.1875 = fieldNorm(doc=2447)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2018 18:02:42
  12. Rao, I.K.R. -> Ravichandra Rao, I.K.: 0.04
    0.042351827 = product of:
      0.084703654 = sum of:
        0.084703654 = product of:
          0.16940731 = sum of:
            0.16940731 = weight(_text_:22 in 2795) [ClassicSimilarity], result of:
              0.16940731 = score(doc=2795,freq=2.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.92856276 = fieldWeight in 2795, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.1875 = fieldNorm(doc=2795)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    29. 2.2008 18:14:22
  13. Van Doorn, M. -> Doorn, M. van: 0.04
    0.042351827 = product of:
      0.084703654 = sum of:
        0.084703654 = product of:
          0.16940731 = sum of:
            0.16940731 = weight(_text_:22 in 4186) [ClassicSimilarity], result of:
              0.16940731 = score(doc=4186,freq=2.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.92856276 = fieldWeight in 4186, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.1875 = fieldNorm(doc=4186)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2010 19:49:03
  14. Sagar, S.A. Dewan -> Dewan Sagar, S.A.: 0.04
    0.042351827 = product of:
      0.084703654 = sum of:
        0.084703654 = product of:
          0.16940731 = sum of:
            0.16940731 = weight(_text_:22 in 4501) [ClassicSimilarity], result of:
              0.16940731 = score(doc=4501,freq=2.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.92856276 = fieldWeight in 4501, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.1875 = fieldNorm(doc=4501)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    3. 5.2022 13:22:36
  15. Ibarra, A.M.T. -> Talavera Ibarra, A.M.: 0.04
    0.042351827 = product of:
      0.084703654 = sum of:
        0.084703654 = product of:
          0.16940731 = sum of:
            0.16940731 = weight(_text_:22 in 2351) [ClassicSimilarity], result of:
              0.16940731 = score(doc=2351,freq=2.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.92856276 = fieldWeight in 2351, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.1875 = fieldNorm(doc=2351)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    3. 8.2006 14:22:26
  16. Pfister, D. Schmidt- -> Schmidt-Pfister, D.: 0.04
    0.042351827 = product of:
      0.084703654 = sum of:
        0.084703654 = product of:
          0.16940731 = sum of:
            0.16940731 = weight(_text_:22 in 5982) [ClassicSimilarity], result of:
              0.16940731 = score(doc=5982,freq=2.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.92856276 = fieldWeight in 5982, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.1875 = fieldNorm(doc=5982)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    16. 1.2018 13:22:54
  17. Gastaminza, F. del Valle -> Valle Gastaminza, F. del: 0.04
    0.042351827 = product of:
      0.084703654 = sum of:
        0.084703654 = product of:
          0.16940731 = sum of:
            0.16940731 = weight(_text_:22 in 2528) [ClassicSimilarity], result of:
              0.16940731 = score(doc=2528,freq=2.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.92856276 = fieldWeight in 2528, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.1875 = fieldNorm(doc=2528)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    29. 8.2004 16:22:16
  18. Álvarez, V. Pachón -> Pachón Álvarez, V.: 0.04
    0.042351827 = product of:
      0.084703654 = sum of:
        0.084703654 = product of:
          0.16940731 = sum of:
            0.16940731 = weight(_text_:22 in 282) [ClassicSimilarity], result of:
              0.16940731 = score(doc=282,freq=2.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.92856276 = fieldWeight in 282, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.1875 = fieldNorm(doc=282)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    24. 6.2012 14:35:22
  19. Silva, A.C. Santana -> Santana Silva, A.C.: 0.04
    0.042351827 = product of:
      0.084703654 = sum of:
        0.084703654 = product of:
          0.16940731 = sum of:
            0.16940731 = weight(_text_:22 in 2805) [ClassicSimilarity], result of:
              0.16940731 = score(doc=2805,freq=2.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.92856276 = fieldWeight in 2805, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.1875 = fieldNorm(doc=2805)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    17. 3.2019 18:22:57
  20. Thompson, B. Swire- -> Swire-Thompson, B.: 0.04
    0.042351827 = product of:
      0.084703654 = sum of:
        0.084703654 = product of:
          0.16940731 = sum of:
            0.16940731 = weight(_text_:22 in 4403) [ClassicSimilarity], result of:
              0.16940731 = score(doc=4403,freq=2.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.92856276 = fieldWeight in 4403, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.1875 = fieldNorm(doc=4403)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    5. 9.2022 19:22:05

Types

  • a 6288
  • m 792
  • el 553
  • s 320
  • x 107
  • b 43
  • i 40
  • r 40
  • n 15
  • p 9
  • ? 8
  • d 3
  • u 2
  • z 2
  • A 1
  • EL 1
  • au 1
  • h 1
