Search (38327 results, page 1 of 1917)

  1. Ackermann, E.: Piaget's constructivism, Papert's constructionism : what's the difference? (2001) 0.20
    0.20460007 = sum of:
      0.060315862 = product of:
        0.18094759 = sum of:
          0.18094759 = weight(_text_:3a in 692) [ClassicSimilarity], result of:
            0.18094759 = score(doc=692,freq=2.0), product of:
              0.38635254 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.04557113 = queryNorm
              0.46834838 = fieldWeight in 692, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.0390625 = fieldNorm(doc=692)
        0.33333334 = coord(1/3)
      0.1413856 = product of:
        0.2827712 = sum of:
          0.2827712 = weight(_text_:2c in 692) [ClassicSimilarity], result of:
            0.2827712 = score(doc=692,freq=2.0), product of:
              0.48297536 = queryWeight, product of:
                10.598275 = idf(docFreq=2, maxDocs=44218)
                0.04557113 = queryNorm
              0.5854775 = fieldWeight in 692, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                10.598275 = idf(docFreq=2, maxDocs=44218)
                0.0390625 = fieldNorm(doc=692)
        0.5 = coord(1/2)
      0.0028986079 = product of:
        0.0057972157 = sum of:
          0.0057972157 = weight(_text_:a in 692) [ClassicSimilarity], result of:
            0.0057972157 = score(doc=692,freq=6.0), product of:
              0.05254565 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.04557113 = queryNorm
              0.11032722 = fieldWeight in 692, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=692)
        0.5 = coord(1/2)
    
    Abstract
    What is the difference between Piaget's constructivism and Papert's "constructionism"? Beyond the mere play on words, I think the distinction holds, and that integrating both views can enrich our understanding of how people learn and grow. Piaget's constructivism offers a window into what children are interested in, and able to achieve, at different stages of their development. The theory describes how children's ways of doing and thinking evolve over time, and under which circumstances children are more likely to let go of, or hold onto, their currently held views. Piaget suggests that children have very good reasons not to abandon their worldviews just because someone else, be it an expert, tells them they're wrong. Papert's constructionism, in contrast, focuses more on the art of learning, or 'learning to learn', and on the significance of making things in learning. Papert is interested in how learners engage in a conversation with [their own or other people's] artifacts, and how these conversations boost self-directed learning, and ultimately facilitate the construction of new knowledge. He stresses the importance of tools, media, and context in human development. Integrating both perspectives illuminates the processes by which individuals come to make sense of their experience, gradually optimizing their interactions with the world.
    Content
    Cf.: https://www.semanticscholar.org/paper/Piaget-%E2%80%99-s-Constructivism-%2C-Papert-%E2%80%99-s-%3A-What-%E2%80%99-s-Ackermann/89cbcc1e740a4591443ff4765a6ae8df0fdf5554. Further pointers to related contributions are given there. Also in: Learning Group Publication 5(2001) no.3, S.438.
    Type
    a
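    The score breakdowns in this listing follow the explain() format of Lucene's ClassicSimilarity: each matching term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm with tf = sqrt(termFreq), and each clause is scaled by its coord factor. As a sanity check, here is a minimal Python sketch (not part of the retrieval system itself) that recomputes the 0.20460007 total of entry 1 from the factors printed above:

    import math

    QUERY_NORM = 0.04557113  # queryNorm, shared by all clauses of this query

    def clause_score(freq, idf, field_norm, coord):
        """One weight(...) leaf of the explain() tree: queryWeight * fieldWeight,
        scaled by the clause's coordination factor."""
        tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
        query_weight = idf * QUERY_NORM       # queryWeight = idf * queryNorm
        field_weight = tf * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
        return query_weight * field_weight * coord

    # The three clauses of entry 1 (doc 692), factors copied from the tree above.
    total = (clause_score(2.0, 8.478011, 0.0390625, 1 / 3)     # _text_:3a
             + clause_score(2.0, 10.598275, 0.0390625, 1 / 2)  # _text_:2c
             + clause_score(6.0, 1.153047, 0.0390625, 1 / 2))  # _text_:a
    print(round(total, 8))  # ~0.20460007, the listed total for entry 1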
  2. Hausser, R.: Grammatical disambiguation : the linear complexity hypothesis for natural language (2020) 0.15
    0.14869812 = product of:
      0.22304718 = sum of:
        0.2173671 = product of:
          0.4347342 = sum of:
            0.4347342 = weight(_text_:loves in 22) [ClassicSimilarity], result of:
              0.4347342 = score(doc=22,freq=4.0), product of:
                0.45969644 = queryWeight, product of:
                  10.087449 = idf(docFreq=4, maxDocs=44218)
                  0.04557113 = queryNorm
                0.9456984 = fieldWeight in 22, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  10.087449 = idf(docFreq=4, maxDocs=44218)
                  0.046875 = fieldNorm(doc=22)
          0.5 = coord(1/2)
        0.0056800875 = product of:
          0.011360175 = sum of:
            0.011360175 = weight(_text_:a in 22) [ClassicSimilarity], result of:
              0.011360175 = score(doc=22,freq=16.0), product of:
                0.05254565 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04557113 = queryNorm
                0.2161963 = fieldWeight in 22, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=22)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    DBS uses a strictly time-linear derivation order. Therefore the basic computational complexity degree of DBS is linear time. The only way to increase DBS complexity above linear is repeating ambiguity. In natural language, however, repeating ambiguity is prevented by grammatical disambiguation. A classic example of a grammatical ambiguity is the 'garden path' sentence The horse raced by the barn fell. The continuation horse+raced introduces an ambiguity between horse which raced and horse which was raced, leading to two parallel derivation strands up to The horse raced by the barn. Depending on whether the continuation is punctuation or a verb, the two strands are grammatically disambiguated, resulting in unambiguous output. A repeated ambiguity occurs in The man who loves the woman who feeds Lucy who Peter loves., with who serving as subject or as object. These readings are grammatically disambiguated by continuing after who with a verb or a noun.
    Type
    a
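    The disambiguation mechanism the abstract describes can be illustrated with a toy sketch. This is emphatically not Hausser's DBS; it is a minimal Python illustration of the idea that parallel derivation strands are carried along until the continuation after the ambiguous stretch grammatically eliminates all but one, keeping the strand count bounded:

    def disambiguate(tokens):
        """Return the surviving reading(s) of 'horse raced': the token right
        after 'by the barn' decides between the two parallel strands."""
        readings = {"main-verb", "reduced-relative"}  # forked at horse+raced
        nxt = tokens[tokens.index("barn") + 1]        # first continuation token
        if nxt == ".":                # punctuation: 'The horse raced by the barn.'
            readings &= {"main-verb"}
        elif nxt == "fell":           # a verb: 'The horse raced by the barn fell.'
            readings &= {"reduced-relative"}
        return readings

    print(disambiguate("The horse raced by the barn .".split()))       # {'main-verb'}
    print(disambiguate("The horse raced by the barn fell .".split()))  # {'reduced-relative'}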
  3. Ziyal, L.K.; Schneider, J.: We need to talk, AI : a comic essay on artificial intelligence (2019) 0.15
    0.14790508 = product of:
      0.2218576 = sum of:
        0.2173671 = product of:
          0.4347342 = sum of:
            0.4347342 = weight(_text_:loves in 5240) [ClassicSimilarity], result of:
              0.4347342 = score(doc=5240,freq=4.0), product of:
                0.45969644 = queryWeight, product of:
                  10.087449 = idf(docFreq=4, maxDocs=44218)
                  0.04557113 = queryNorm
                0.9456984 = fieldWeight in 5240, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  10.087449 = idf(docFreq=4, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5240)
          0.5 = coord(1/2)
        0.004490504 = product of:
          0.008981008 = sum of:
            0.008981008 = weight(_text_:a in 5240) [ClassicSimilarity], result of:
              0.008981008 = score(doc=5240,freq=10.0), product of:
                0.05254565 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04557113 = queryNorm
                0.1709182 = fieldWeight in 5240, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5240)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In 30 years, will robots do all the unpleasant work for us? Or will they subjugate us to become submissive slaves? The debates on how Artificial Intelligence (AI) will change our lives move between these extremes. There is no doubt that the change will be dramatic. Maybe now is just the right time to get involved. This pioneering comic essay on AI invites you on an illustrated journey through the dimensions and implications of the groundbreaking technology. Discussing important opportunities and risks associated with AI, this work is a creative stimulus for insiders of the subject as well as an invitation for newbies to get informed and join the debate. With a doctorate in economics, Julia Schneider appreciates data and code as tools for solving complex puzzles - and loves comics as a medium for telling complex stories. Coming from the opposite direction, artist Lena Kadriye Ziyal loves encrypting complexity with associations, thereby expanding the meaning of a theme with her perspective.
  4. Gödert, W.; Hubrich, J.; Boteram, F.: Thematische Recherche und Interoperabilität : Wege zur Optimierung des Zugriffs auf heterogen erschlossene Dokumente (2009) 0.12
    0.11706929 = product of:
      0.17560393 = sum of:
        0.1413856 = product of:
          0.2827712 = sum of:
            0.2827712 = weight(_text_:2c in 193) [ClassicSimilarity], result of:
              0.2827712 = score(doc=193,freq=2.0), product of:
                0.48297536 = queryWeight, product of:
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.04557113 = queryNorm
                0.5854775 = fieldWeight in 193, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=193)
          0.5 = coord(1/2)
        0.03421832 = sum of:
          0.0033470236 = weight(_text_:a in 193) [ClassicSimilarity], result of:
            0.0033470236 = score(doc=193,freq=2.0), product of:
              0.05254565 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.04557113 = queryNorm
              0.06369744 = fieldWeight in 193, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=193)
          0.030871296 = weight(_text_:22 in 193) [ClassicSimilarity], result of:
            0.030871296 = score(doc=193,freq=2.0), product of:
              0.15958233 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04557113 = queryNorm
              0.19345059 = fieldWeight in 193, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=193)
      0.6666667 = coord(2/3)
    
    Source
    https://opus4.kobv.de/opus4-bib-info/frontdoor/index/index/searchtype/authorsearch/author/%22Hubrich%2C+Jessica%22/docId/703/start/0/rows/20
    Type
    a
  5. Dewey, M.: Decimal classification beginnings (1990) 0.12
    0.1153841 = product of:
      0.17307615 = sum of:
        0.09095219 = product of:
          0.27285656 = sum of:
            0.27285656 = weight(_text_:author's in 3554) [ClassicSimilarity], result of:
              0.27285656 = score(doc=3554,freq=2.0), product of:
                0.30624497 = queryWeight, product of:
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.04557113 = queryNorm
                0.8909749 = fieldWeight in 3554, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3554)
          0.33333334 = coord(1/3)
        0.082123965 = sum of:
          0.008032857 = weight(_text_:a in 3554) [ClassicSimilarity], result of:
            0.008032857 = score(doc=3554,freq=2.0), product of:
              0.05254565 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.04557113 = queryNorm
              0.15287387 = fieldWeight in 3554, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.09375 = fieldNorm(doc=3554)
          0.07409111 = weight(_text_:22 in 3554) [ClassicSimilarity], result of:
            0.07409111 = score(doc=3554,freq=2.0), product of:
              0.15958233 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04557113 = queryNorm
              0.46428138 = fieldWeight in 3554, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.09375 = fieldNorm(doc=3554)
      0.6666667 = coord(2/3)
    
    Abstract
    Describes the author's development of the Dewey Decimal Classification
    Date
    25.12.1995 22:28:43
    Type
    a
  6. Entlich, R.: FAQ: Image Search Engines (2001) 0.10
    0.10478672 = product of:
      0.15718007 = sum of:
        0.15370174 = product of:
          0.30740348 = sum of:
            0.30740348 = weight(_text_:loves in 155) [ClassicSimilarity], result of:
              0.30740348 = score(doc=155,freq=2.0), product of:
                0.45969644 = queryWeight, product of:
                  10.087449 = idf(docFreq=4, maxDocs=44218)
                  0.04557113 = queryNorm
                0.6687097 = fieldWeight in 155, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  10.087449 = idf(docFreq=4, maxDocs=44218)
                  0.046875 = fieldNorm(doc=155)
          0.5 = coord(1/2)
        0.0034783294 = product of:
          0.006956659 = sum of:
            0.006956659 = weight(_text_:a in 155) [ClassicSimilarity], result of:
              0.006956659 = score(doc=155,freq=6.0), product of:
                0.05254565 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04557113 = queryNorm
                0.13239266 = fieldWeight in 155, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=155)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Everyone loves images. The web wasn't anything until images came along; then it was an overnight success. So how does one find a specific image on the web? By using one of a burgeoning number of image-focused search engines. These search engines are simply optimized versions of typical web indexes, with crawlers that go around sucking down web content and indexing it. Image search engines, however, focus on images only, and on the web page text that may describe them. As information professionals, we know that this is a clumsy approach at best, but as the author puts it, until more sophisticated methods become available, the tools profiled here will "have to suffice." Seven search engines are thoroughly tested in this review article, with Google's Image Search (http://www.google.com/imghp?hl=en) being the highest rated.
  7. Belayche, C.: ¬A propos de la classification de Dewey (1997) 0.08
    0.079536274 = product of:
      0.1193044 = sum of:
        0.06063479 = product of:
          0.18190438 = sum of:
            0.18190438 = weight(_text_:author's in 1171) [ClassicSimilarity], result of:
              0.18190438 = score(doc=1171,freq=2.0), product of:
                0.30624497 = queryWeight, product of:
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.04557113 = queryNorm
                0.59398323 = fieldWeight in 1171, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1171)
          0.33333334 = coord(1/3)
        0.058669616 = sum of:
          0.009275545 = weight(_text_:a in 1171) [ClassicSimilarity], result of:
            0.009275545 = score(doc=1171,freq=6.0), product of:
              0.05254565 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.04557113 = queryNorm
              0.17652355 = fieldWeight in 1171, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0625 = fieldNorm(doc=1171)
          0.04939407 = weight(_text_:22 in 1171) [ClassicSimilarity], result of:
            0.04939407 = score(doc=1171,freq=2.0), product of:
              0.15958233 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04557113 = queryNorm
              0.30952093 = fieldWeight in 1171, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1171)
      0.6666667 = coord(2/3)
    
    Abstract
    All classifications are based on ideologies, and Dewey is marked by its author's origins in 19th-century North America. Subsequent revisions indicate changed ways of understanding the world. Section 157 (psycho-pathology) is now included with 616.89 (mental disorders), reflecting the move to a genetics-based approach. Table 5 (racial, ethnic and national groups) is, however, unchanged, despite changing views on such categorisation.
    Source
    Bulletin d'informations de l'Association des Bibliothecaires Francais. 1997, no.175, S.22-23
    Type
    a
  8. Shuttleworth, C.: Marot, Hofstadter, index (1998) 0.08
    0.079536274 = product of:
      0.1193044 = sum of:
        0.06063479 = product of:
          0.18190438 = sum of:
            0.18190438 = weight(_text_:author's in 4642) [ClassicSimilarity], result of:
              0.18190438 = score(doc=4642,freq=2.0), product of:
                0.30624497 = queryWeight, product of:
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.04557113 = queryNorm
                0.59398323 = fieldWeight in 4642, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4642)
          0.33333334 = coord(1/3)
        0.058669616 = sum of:
          0.009275545 = weight(_text_:a in 4642) [ClassicSimilarity], result of:
            0.009275545 = score(doc=4642,freq=6.0), product of:
              0.05254565 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.04557113 = queryNorm
              0.17652355 = fieldWeight in 4642, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0625 = fieldNorm(doc=4642)
          0.04939407 = weight(_text_:22 in 4642) [ClassicSimilarity], result of:
            0.04939407 = score(doc=4642,freq=2.0), product of:
              0.15958233 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04557113 = queryNorm
              0.30952093 = fieldWeight in 4642, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=4642)
      0.6666667 = coord(2/3)
    
    Abstract
    Comments on Douglas Hofstadter's index to his book 'Le ton beau de Marot: in praise of the music of language'. Hofstadter took charge of the book design, typography, typesetting and copy-editing, and also compiled the index, which covers 23 pages of three columns and is set in practically illegible 4-point type. Although the index breaks all the rules of indexing, it is a masterly creation showing the author's industry, exuberance and wit. Summarizes Hofstadter's own remarks on how creating the index gave him new insights into what his book was essentially about.
    Source
    Indexer. 21(1998) no.1, S.22-23
    Type
    a
  9. Jacobs, J.W.; Summers, E.; Ankersen, E.: Cyril: expanding the horizons of MARC21 (2004) 0.08
    0.07840155 = product of:
      0.11760232 = sum of:
        0.06063479 = product of:
          0.18190438 = sum of:
            0.18190438 = weight(_text_:author's in 4749) [ClassicSimilarity], result of:
              0.18190438 = score(doc=4749,freq=2.0), product of:
                0.30624497 = queryWeight, product of:
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.04557113 = queryNorm
                0.59398323 = fieldWeight in 4749, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4749)
          0.33333334 = coord(1/3)
        0.056967523 = sum of:
          0.0075734504 = weight(_text_:a in 4749) [ClassicSimilarity], result of:
            0.0075734504 = score(doc=4749,freq=4.0), product of:
              0.05254565 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.04557113 = queryNorm
              0.14413087 = fieldWeight in 4749, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0625 = fieldNorm(doc=4749)
          0.04939407 = weight(_text_:22 in 4749) [ClassicSimilarity], result of:
            0.04939407 = score(doc=4749,freq=2.0), product of:
              0.15958233 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04557113 = queryNorm
              0.30952093 = fieldWeight in 4749, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=4749)
      0.6666667 = coord(2/3)
    
    Abstract
    Describes the construction of the author's Perl program, Cyril, to add vernacular Russian (Cyrillic) characters to existing MARC records. The program takes advantage of the ALA-LC standards for Romanization to create character mappings that "de-transliterate" specified MARC fields. The creation of Cyril raises both linguistic and technical issues, which are thoroughly examined. Concludes by considering the implications for cataloging and authority control standards, as we move to a multilingual, multi-script bibliographic environment.
    Source
    Library hi tech. 22(2004) no.1, S.8-17
    Type
    a
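    Cyril itself is a Perl program, and its full mapping tables are not shown in this record. As an illustration of the technique the abstract names, here is a hedged Python sketch of longest-match-first "de-transliteration", using a deliberately tiny subset of the ALA-LC Romanization table for Russian (lowercase only; the real tables also handle capitals, diacritics, and MARC-8/Unicode encoding issues):

    # Tiny, illustrative subset of the ALA-LC Russian Romanization table.
    ALA_LC_TO_CYRILLIC = {
        "shch": "щ", "kh": "х", "ts": "ц", "ch": "ч", "sh": "ш", "zh": "ж",
        "a": "а", "b": "б", "v": "в", "g": "г", "d": "д", "e": "е",
        "z": "з", "i": "и", "k": "к", "l": "л", "m": "м", "n": "н",
        "o": "о", "p": "п", "r": "р", "s": "с", "t": "т", "u": "у",
    }

    def detransliterate(text: str) -> str:
        # Try the longest digraphs first so 'shch' is not consumed as 's'+'h'+...
        keys = sorted(ALA_LC_TO_CYRILLIC, key=len, reverse=True)
        out, i = [], 0
        while i < len(text):
            for k in keys:
                if text.startswith(k, i):
                    out.append(ALA_LC_TO_CYRILLIC[k])
                    i += len(k)
                    break
            else:                     # unmapped character: pass it through
                out.append(text[i])
                i += 1
        return "".join(out)

    print(detransliterate("moskva"))   # -> москва
    print(detransliterate("chekhov"))  # -> чехов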
  10. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.08
    0.07830497 = product of:
      0.11745745 = sum of:
        0.07237904 = product of:
          0.2171371 = sum of:
            0.2171371 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.2171371 = score(doc=562,freq=2.0), product of:
                0.38635254 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04557113 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.04507841 = sum of:
          0.008032857 = weight(_text_:a in 562) [ClassicSimilarity], result of:
            0.008032857 = score(doc=562,freq=8.0), product of:
              0.05254565 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.04557113 = queryNorm
              0.15287387 = fieldWeight in 562, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
          0.037045553 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.037045553 = score(doc=562,freq=2.0), product of:
              0.15958233 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04557113 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
      0.6666667 = coord(2/3)
    
    Abstract
    Document representations for text classification are typically based on the classical bag-of-words paradigm. This approach comes with deficiencies that motivate the integration of features on a higher semantic level than single words. In this paper we propose an enhancement of the classical document representation through concepts extracted from background knowledge. Boosting is used for the actual classification. Experimental evaluations on two well-known text corpora support our approach through consistent improvement of the results.
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
    Type
    a
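    The paper's own pipeline enriches the bag-of-words representation with concepts extracted from background knowledge before boosting; that extraction step is omitted in the following minimal scikit-learn sketch, which only shows the boosting half (decision stumps as weak learners over term features; the corpus and labels are placeholders, not the paper's test corpora):

    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.pipeline import make_pipeline

    docs = [  # placeholder documents, two classes
        "interest rates rise as the central bank tightens",
        "bond markets rallied on the inflation report",
        "the striker scored twice in the second half",
        "the goalkeeper saved a late penalty",
    ]
    labels = ["finance", "finance", "sport", "sport"]

    model = make_pipeline(
        CountVectorizer(),                    # term features only; concept
                                              # features would be appended here
        AdaBoostClassifier(n_estimators=50),  # default weak learner: depth-1 tree
    )
    model.fit(docs, labels)
    print(model.predict(["inflation and interest rates"]))  # expected: ['finance']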
  11. Capps, M.; Ladd, B.; Stotts, D.: Enhanced graph models in the Web : multi-client, multi-head, multi-tail browsing (1996) 0.07
    0.06959425 = product of:
      0.104391366 = sum of:
        0.053055447 = product of:
          0.15916634 = sum of:
            0.15916634 = weight(_text_:author's in 5860) [ClassicSimilarity], result of:
              0.15916634 = score(doc=5860,freq=2.0), product of:
                0.30624497 = queryWeight, product of:
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.04557113 = queryNorm
                0.51973534 = fieldWeight in 5860, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5860)
          0.33333334 = coord(1/3)
        0.051335916 = sum of:
          0.008116102 = weight(_text_:a in 5860) [ClassicSimilarity], result of:
            0.008116102 = score(doc=5860,freq=6.0), product of:
              0.05254565 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.04557113 = queryNorm
              0.1544581 = fieldWeight in 5860, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5860)
          0.043219812 = weight(_text_:22 in 5860) [ClassicSimilarity], result of:
            0.043219812 = score(doc=5860,freq=2.0), product of:
              0.15958233 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04557113 = queryNorm
              0.2708308 = fieldWeight in 5860, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5860)
      0.6666667 = coord(2/3)
    
    Abstract
    Richer graph models permit authors to 'program' the browsing behaviour they want WWW readers to see by turning the hypertext into a hyperprogram with specific semantics. Multiple browsing streams can be started under the author's control and then kept in step through the synchronization mechanisms provided by the graph model. Adds a Semantic Web Graph Layer (SWGL) which allows dynamic interpretation of link and node structures according to graph models. Details the SWGL and its architecture, some sample protocol implementations, and the latest extensions to MHTML
    Date
    1. 8.1996 22:08:06
    Type
    a
  12. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.07
    0.06832848 = product of:
      0.10249271 = sum of:
        0.09650537 = product of:
          0.28951612 = sum of:
            0.28951612 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.28951612 = score(doc=230,freq=2.0), product of:
                0.38635254 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04557113 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
        0.0059873387 = product of:
          0.011974677 = sum of:
            0.011974677 = weight(_text_:a in 230) [ClassicSimilarity], result of:
              0.011974677 = score(doc=230,freq=10.0), product of:
                0.05254565 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04557113 = queryNorm
                0.22789092 = fieldWeight in 230, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In this lecture I intend to challenge those who uphold a monist or even a dualist view of the universe; and I will propose, instead, a pluralist view. I will propose a view of the universe that recognizes at least three different but interacting sub-universes.
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
    Type
    a
  13. Schrodt, R.: Tiefen und Untiefen im wissenschaftlichen Sprachgebrauch (2008) 0.07
    0.066121995 = product of:
      0.09918299 = sum of:
        0.09650537 = product of:
          0.28951612 = sum of:
            0.28951612 = weight(_text_:3a in 140) [ClassicSimilarity], result of:
              0.28951612 = score(doc=140,freq=2.0), product of:
                0.38635254 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04557113 = queryNorm
                0.7493574 = fieldWeight in 140, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=140)
          0.33333334 = coord(1/3)
        0.002677619 = product of:
          0.005355238 = sum of:
            0.005355238 = weight(_text_:a in 140) [ClassicSimilarity], result of:
              0.005355238 = score(doc=140,freq=2.0), product of:
                0.05254565 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04557113 = queryNorm
                0.10191591 = fieldWeight in 140, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=140)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
    Cf. also: https://studylibde.com/doc/13053640/richard-schrodt. Cf. also: http://www.univie.ac.at/Germanistik/schrodt/vorlesung/wissenschaftssprache.doc.
    Type
    a
  14. Zhang, Y.: Developing a holistic model for digital library evaluation (2010) 0.06
    0.06348181 = product of:
      0.09522271 = sum of:
        0.045476094 = product of:
          0.13642828 = sum of:
            0.13642828 = weight(_text_:author's in 2360) [ClassicSimilarity], result of:
              0.13642828 = score(doc=2360,freq=2.0), product of:
                0.30624497 = queryWeight, product of:
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.04557113 = queryNorm
                0.44548744 = fieldWeight in 2360, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2360)
          0.33333334 = coord(1/3)
        0.049746618 = sum of:
          0.0127010625 = weight(_text_:a in 2360) [ClassicSimilarity], result of:
            0.0127010625 = score(doc=2360,freq=20.0), product of:
              0.05254565 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.04557113 = queryNorm
              0.24171482 = fieldWeight in 2360, product of:
                4.472136 = tf(freq=20.0), with freq of:
                  20.0 = termFreq=20.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=2360)
          0.037045553 = weight(_text_:22 in 2360) [ClassicSimilarity], result of:
            0.037045553 = score(doc=2360,freq=2.0), product of:
              0.15958233 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04557113 = queryNorm
              0.23214069 = fieldWeight in 2360, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2360)
      0.6666667 = coord(2/3)
    
    Abstract
    This article reports the author's recent research in developing a holistic model for various levels of digital library (DL) evaluation, in which perceived important criteria from heterogeneous stakeholder groups are organized and presented. To develop such a model, the author applied a three-stage research approach: exploration, confirmation, and verification. During the exploration stage, a literature review was conducted, followed by an interview along with a card-sorting technique, to collect important criteria perceived by DL experts. The criteria identified were then used to develop an online survey during the confirmation stage. Survey respondents (431 in total) from 22 countries rated the importance of the criteria. A holistic DL evaluation model was constructed using statistical techniques. Finally, the verification stage was devised to test the reliability of the model in the context of searching and evaluating an operational DL. The proposed model fills two lacunae in the DL domain: (a) the lack of a comprehensive and flexible framework to guide and benchmark evaluations, and (b) the uncertainty about what divergence exists among heterogeneous DL stakeholders, including general users.
    Type
    a
  15. Castle, C.: Getting the central RDM message across : a case study of central versus discipline-specific Research Data Services (RDS) at the University of Cambridge (2019) 0.06
    0.06300431 = product of:
      0.09450646 = sum of:
        0.053594094 = product of:
          0.16078228 = sum of:
            0.16078228 = weight(_text_:author's in 5491) [ClassicSimilarity], result of:
              0.16078228 = score(doc=5491,freq=4.0), product of:
                0.30624497 = queryWeight, product of:
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.04557113 = queryNorm
                0.52501196 = fieldWeight in 5491, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5491)
          0.33333334 = coord(1/3)
        0.040912367 = sum of:
          0.010041072 = weight(_text_:a in 5491) [ClassicSimilarity], result of:
            0.010041072 = score(doc=5491,freq=18.0), product of:
              0.05254565 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.04557113 = queryNorm
              0.19109234 = fieldWeight in 5491, product of:
                4.2426405 = tf(freq=18.0), with freq of:
                  18.0 = termFreq=18.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5491)
          0.030871296 = weight(_text_:22 in 5491) [ClassicSimilarity], result of:
            0.030871296 = score(doc=5491,freq=2.0), product of:
              0.15958233 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04557113 = queryNorm
              0.19345059 = fieldWeight in 5491, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5491)
      0.6666667 = coord(2/3)
    
    Abstract
    RDS are usually cross-disciplinary, centralised services, which are increasingly provided at a university by the academic library and in collaboration with other RDM stakeholders, such as the Research Office. At research-intensive universities, research data is generated in a wide range of disciplines and sub-disciplines. This paper will discuss how providing discipline-specific RDM support is approached by such universities and academic libraries, and the advantages and disadvantages of these central and discipline-specific approaches. A descriptive case study on the author's experiences of collaborating with a central RDS at the University of Cambridge, as a subject librarian embedded in an academic department, is a major component of this paper. The case study describes how centralised RDM services offered by the Office of Scholarly Communication (OSC) have been adapted to meet discipline-specific needs in the Department of Chemistry. It will introduce the department and the OSC, and describe the author's role in delivering RDM training, as well as the Data Champions programme, and their membership of the RDM Project Group. It will describe the outcomes of this collaboration for the Department of Chemistry, and for the centralised service. Centralised and discipline-specific approaches to RDS provision have their own advantages and disadvantages. Supporting the discipline-specific RDM needs of researchers is proving particularly challenging for universities to address sustainably: it requires adequate financial resources and staff skilled (or re-skilled) in RDM. A mixed approach is the most desirable, cost-effective way of providing RDS, but this still has constraints.
    Date
    7. 9.2019 21:30:22
    Type
    a
  16. Hjoerland, B.: ¬The importance of theories of knowledge : indexing and information retrieval as an example (2011) 0.06
    0.061573233 = product of:
      0.09235985 = sum of:
        0.045476094 = product of:
          0.13642828 = sum of:
            0.13642828 = weight(_text_:author's in 4359) [ClassicSimilarity], result of:
              0.13642828 = score(doc=4359,freq=2.0), product of:
                0.30624497 = queryWeight, product of:
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.04557113 = queryNorm
                0.44548744 = fieldWeight in 4359, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4359)
          0.33333334 = coord(1/3)
        0.046883754 = sum of:
          0.0098382 = weight(_text_:a in 4359) [ClassicSimilarity], result of:
            0.0098382 = score(doc=4359,freq=12.0), product of:
              0.05254565 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.04557113 = queryNorm
              0.18723148 = fieldWeight in 4359, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=4359)
          0.037045553 = weight(_text_:22 in 4359) [ClassicSimilarity], result of:
            0.037045553 = score(doc=4359,freq=2.0), product of:
              0.15958233 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04557113 = queryNorm
              0.23214069 = fieldWeight in 4359, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4359)
      0.6666667 = coord(2/3)
    
    Abstract
    A recent study in information science (IS) raises important issues concerning the value of human indexing and basic theories of indexing and information retrieval, as well as the use of quantitative and qualitative approaches in IS and the underlying theories of knowledge informing the field. The present article uses L&E as the point of departure for demonstrating in what way more social and interpretative understandings may provide fruitful improvements for research in indexing, knowledge organization, and information retrieval. The article is motivated by the observation that philosophical contributions tend to be ignored in IS if they are not directly formed as criticisms or invitations to dialogs. It is part of the author's ongoing publication of articles about philosophical issues in IS and is intended to be followed by analyses of other examples of contributions to core issues in IS. Although it is formulated as a criticism of a specific paper, it should be seen as part of a general discussion of the philosophical foundation of IS and as support for the emerging social paradigm in this field.
    Date
    17. 3.2011 19:22:55
    Type
    a
  17. Bates, M.J.: Information science at the University of California at Berkeley in the 1960s : a memoir of student days (2004) 0.06
    0.060737193 = product of:
      0.09110579 = sum of:
        0.08575055 = product of:
          0.25725165 = sum of:
            0.25725165 = weight(_text_:author's in 7246) [ClassicSimilarity], result of:
              0.25725165 = score(doc=7246,freq=4.0), product of:
                0.30624497 = queryWeight, product of:
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.04557113 = queryNorm
                0.84001917 = fieldWeight in 7246, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7246)
          0.33333334 = coord(1/3)
        0.005355238 = product of:
          0.010710476 = sum of:
            0.010710476 = weight(_text_:a in 7246) [ClassicSimilarity], result of:
              0.010710476 = score(doc=7246,freq=8.0), product of:
                0.05254565 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04557113 = queryNorm
                0.20383182 = fieldWeight in 7246, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7246)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The author's experiences as a master's and doctoral student at the University of California at Berkeley School of Library and Information Studies during a formative period in the history of information science, 1966-71, are described. The relationship between documentation and information science as experienced in that program is discussed, as well as the various influences, both social and intellectual, that shaped the author's understanding of information science at that time.
    Type
    a
  18. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.06
    0.0594187 = product of:
      0.08912805 = sum of:
        0.08444221 = product of:
          0.25332662 = sum of:
            0.25332662 = weight(_text_:3a in 306) [ClassicSimilarity], result of:
              0.25332662 = score(doc=306,freq=2.0), product of:
                0.38635254 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04557113 = queryNorm
                0.65568775 = fieldWeight in 306, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=306)
          0.33333334 = coord(1/3)
        0.004685833 = product of:
          0.009371666 = sum of:
            0.009371666 = weight(_text_:a in 306) [ClassicSimilarity], result of:
              0.009371666 = score(doc=306,freq=8.0), product of:
                0.05254565 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04557113 = queryNorm
                0.17835285 = fieldWeight in 306, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=306)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Although service-oriented architectures go a long way toward providing interoperability in distributed, heterogeneous environments, managing semantic differences in such environments remains a challenge. We give an overview of the issue of semantic interoperability (integration), provide a semantic characterization of services, and discuss the role of ontologies. Then we analyze four basic models of semantic interoperability that differ with respect to their mapping between service descriptions and ontologies, and with respect to where the evaluation of the integration logic is performed. We also provide some guidelines for selecting one of the possible interoperability models.
    Content
    Vgl.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
    Type
    a
  19. Brueggeman, P.: ¬19 tips for enhancing CD-ROM performance (1993) 0.06
    0.058801156 = product of:
      0.08820173 = sum of:
        0.045476094 = product of:
          0.13642828 = sum of:
            0.13642828 = weight(_text_:author's in 3749) [ClassicSimilarity], result of:
              0.13642828 = score(doc=3749,freq=2.0), product of:
                0.30624497 = queryWeight, product of:
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.04557113 = queryNorm
                0.44548744 = fieldWeight in 3749, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3749)
          0.33333334 = coord(1/3)
        0.04272564 = sum of:
          0.0056800875 = weight(_text_:a in 3749) [ClassicSimilarity], result of:
            0.0056800875 = score(doc=3749,freq=4.0), product of:
              0.05254565 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.04557113 = queryNorm
              0.10809815 = fieldWeight in 3749, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=3749)
          0.037045553 = weight(_text_:22 in 3749) [ClassicSimilarity], result of:
            0.037045553 = score(doc=3749,freq=2.0), product of:
              0.15958233 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04557113 = queryNorm
              0.23214069 = fieldWeight in 3749, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3749)
      0.6666667 = coord(2/3)
    
    Abstract
    Lists 19 tips, based on the author's experience with IBM-compatible CD-ROM workstations, designed to yield improved performance through more efficient use of the computer hardware, particularly the hard disc. The tips also apply to Macintosh workstations. Covers: optimising files; placing CD-ROM software at the front of the hard disc; using disc-caching software; use of the autopark facility; checking the interleave; browsing for orphan files; using the CHKDSK/f command; low-level formatting of the hard disc; purchasing microcomputers with large RAM caches; stepping up in MHz and CPU; using more memory and memory-management software; putting the full path before software loaded by AUTOEXEC.BAT and batch files; REMming out software-specific lines in AUTOEXEC.BAT and CONFIG.SYS; feeding paper to the printer from a box on the floor; booting up in turbo mode and with Num Lock off; speeding up cursor keys; and protecting system enhancements.
    Source
    CD-ROM professional. 6(1993) no.1, S.17-22
    Type
    a
  20. Alexandre Hannud Abdo, A.H. => Hannud Abdo, A.: 0.05
    0.05474931 = product of:
      0.16424793 = sum of:
        0.16424793 = sum of:
          0.016065715 = weight(_text_:a in 617) [ClassicSimilarity], result of:
            0.016065715 = score(doc=617,freq=2.0), product of:
              0.05254565 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.04557113 = queryNorm
              0.30574775 = fieldWeight in 617, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.1875 = fieldNorm(doc=617)
          0.14818221 = weight(_text_:22 in 617) [ClassicSimilarity], result of:
            0.14818221 = score(doc=617,freq=2.0), product of:
              0.15958233 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04557113 = queryNorm
              0.92856276 = fieldWeight in 617, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.1875 = fieldNorm(doc=617)
      0.33333334 = coord(1/3)
    
    Date
    7. 6.2022 19:22:19
