Search (274 results, page 1 of 14)

  • Active filter: type_ss:"x"
  1. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.63
    Content
    Cf.: http://creativechoice.org/doc/HansJonas.pdf.
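The relevance figure after each title (0.63, 0.57, ...) is a Lucene TF-IDF score; the system's scoring breakdowns are labelled [ClassicSimilarity]. As a rough sketch, assuming Lucene's classic defaults rather than the exact configuration of this installation, a query q scores a document d as:

```latex
\[
\operatorname{score}(q,d)
  = \operatorname{coord}(q,d)\cdot\operatorname{queryNorm}(q)
    \cdot \sum_{t\in q} \sqrt{\operatorname{freq}(t,d)}\cdot\operatorname{idf}(t)^{2}\cdot\operatorname{fieldNorm}(t,d),
\qquad
\operatorname{idf}(t) = 1 + \ln\frac{N}{\operatorname{docFreq}(t)+1}
\]
```

For example, a query term found in 24 of N = 44,218 indexed documents gets idf = 1 + ln(44218/25) ≈ 8.48, which is why rare terms dominate these scores.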
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.57
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations, with their uncertainties taken into account. Lastly, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts, and to rank documents using these structural representations. This dissertation overcomes the limitation of word-based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards the new generation of intelligent, semantic, and structured information retrieval.
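As a rough illustration of the bag-of-entities idea described above (a sketch under assumptions, not the author's implementation; the entity IDs and the simple overlap scoring are invented for the example), documents can be represented by counts of their entity annotations and ranked in entity space:

```python
from collections import Counter

def bag_of_entities(annotations):
    """Represent a document by counts of its (automatically) linked entities."""
    return Counter(annotations)

def rank(query_entities, docs):
    """Score each document by entity overlap with the query, in entity space."""
    scores = {}
    for doc_id, annotations in docs.items():
        boe = bag_of_entities(annotations)
        scores[doc_id] = sum(boe[e] for e in query_entities)  # missing keys count 0
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical entity annotations (e.g., output of an entity linker).
docs = {
    "d1": ["/m/obama", "/m/white_house", "/m/obama"],
    "d2": ["/m/family_of_obama", "/m/white_house"],
}
print(rank(["/m/obama", "/m/white_house"], docs))  # d1 outranks d2
```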
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  3. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.44
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
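As a rough sketch of association-based multi-word term extraction (the thesis's three new measures are not reproduced here; pointwise mutual information serves as an illustrative stand-in, and the LocalMaxs filtering step, which keeps candidates whose score is a local maximum relative to their sub- and super-grams, is omitted):

```python
import math
from collections import Counter

def bigram_pmi(tokens):
    """Pointwise mutual information for adjacent word pairs.

    PMI is only a stand-in; the thesis proposes its own association
    measures, which this sketch does not reproduce.
    """
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    scores = {}
    for (w1, w2), freq in bigrams.items():
        p_xy = freq / (n - 1)
        p_x, p_y = unigrams[w1] / n, unigrams[w2] / n
        scores[(w1, w2)] = math.log2(p_xy / (p_x * p_y))
    return scores

tokens = "information retrieval systems use information retrieval models".split()
for pair, score in sorted(bigram_pmi(tokens).items(), key=lambda kv: -kv[1]):
    print(pair, round(score, 2))  # recurring pairs score highest
```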
    Content
    A thesis presented to The University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10.01.2013 19:22:47
  4. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.40
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Owing to their purely syntactical nature, traditional information retrieval approaches did not succeed in treating content itself (i.e. its meaning, not its representation), which leads to very low usefulness of retrieval results for the user's task at hand. In the last ten years, ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the very ambiguous nature of the retrieval process, in which a user, unfamiliar with the underlying repository and/or query syntax, only approximates his information need in a query, makes it necessary to include the user more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In this way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between user and retrieval system, in which the system tries to anticipate the user's information need and deliver relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the ability to conceptualise a user's information need in the right manner and to interpret the retrieval results accordingly is a key issue in realising much more meaningful information retrieval systems.
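A toy illustration of ontology-driven query refinement in the spirit of the Librarian Agent process described above (the ontology and the refinement policy are invented for the example; the thesis uses full ontology-based inferencing rather than a simple narrower-concept lookup):

```python
# Toy domain ontology: concept -> narrower concepts.
ontology = {
    "jaguar": ["jaguar_car", "jaguar_animal"],
    "jaguar_car": ["jaguar_xk", "jaguar_xj"],
    "jaguar_animal": [],
}

def refinements(query_term, ontology):
    """Suggest narrower concepts so the user can disambiguate the query."""
    return ontology.get(query_term, [])

print(refinements("jaguar", ontology))  # -> ['jaguar_car', 'jaguar_animal']
```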
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  5. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.32
    Content
    PhD dissertation at the International Doctorate School in Information and Communication Technology. Cf.: https://core.ac.uk/download/pdf/150083013.pdf.
    Imprint
    Trento : University / Department of information engineering and computer science
  6. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.32
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. Cf. also the accompanying presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
    Imprint
    Wien : Universität / Library and Information Studies
  7. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.26
    Footnote
    Cf.: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls.
  8. Piros, A.: Az ETO-jelzetek automatikus interpretálásának és elemzésének kérdései (2018) 0.26
    Content
    Cf. also: New automatic interpreter for complex UDC numbers. At: https://udcc.org/files/AttilaPiros_EC_36-37_2014-2015.pdf.
  9. Stets, P.: Ranking-Algorithmen im Information Retrieval (1994) 0.07
  10. Líska, M.: Evaluation of mathematics retrieval (2013) 0.03
    Abstract
    The thesis deals with the evaluation of mathematics information retrieval (IR). It gives an overview of the history of regular IR evaluation, of the initiatives engaged in this field of research, and of the most common methods and measures used for evaluation. The findings are applied to the specifics of mathematics retrieval. The thesis also summarizes the state of the art of the MIaS math search system, which is already being used in an international web portal, and describes the latest developments aiming towards the second version of the system. In addition to participating in the international evaluation conference and workshop, MIaS is tested for effectiveness and efficiency in this work. The measured performance indicators are evaluated and future work is suggested accordingly.
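As a minimal sketch of two of the standard measures such evaluations rely on (the exact measures and relevance judgments used for MIaS are not reproduced here):

```python
def precision_at_k(ranked, relevant, k):
    """Fraction of the top-k results that are relevant."""
    return sum(1 for d in ranked[:k] if d in relevant) / k

def average_precision(ranked, relevant):
    """Mean of the precision values at each rank where a relevant doc appears."""
    hits, total = 0, 0.0
    for i, d in enumerate(ranked, start=1):
        if d in relevant:
            hits += 1
            total += hits / i
    return total / len(relevant) if relevant else 0.0

ranked = ["d3", "d1", "d7", "d2"]          # hypothetical system output
relevant = {"d1", "d2"}                     # hypothetical judgments
print(precision_at_k(ranked, relevant, 2))  # 0.5
print(average_precision(ranked, relevant))  # (1/2 + 2/4) / 2 = 0.5
```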
  11. Grün, S.: Mehrwortbegriffe und Latent Semantic Analysis : Bewertung automatisch extrahierter Mehrwortgruppen mit LSA (2017) 0.03
    Abstract
    This study investigates the potential of multi-word terms for information retrieval. The aim of the work is to use the Latent Semantic Analysis (LSA) method to weight intellectually positively rated candidates higher than negatively rated candidates, so that the positive candidates are preferred in an information retrieval ranking. A version of the social-science GIRT database (German Indexing and Retrieval Testdatabase) was used as the collection. To identify multi-word term candidates, the automatic indexing system Lingo was used; the required core functionalities were lemmatization, compound identification, algorithmic multi-word recognition, and weighting of index terms with the LSA model. The multi-word candidates recognized by Lingo and weighted by LSA were then evaluated. First, an intellectual selection of positive and negative multi-word candidates was carried out. In the second evaluation step, the yield was computed to obtain the proportion of positive multi-word candidates. In the final evaluation step, R-precision was used to compute how many positively rated multi-word candidates made it to position k of the ranking. The yield of positive multi-word candidates averaged about 39%, while R-precision achieved an average value of 54%. The LSA model thus achieves an ambivalent result with a positive tendency.
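A minimal sketch of the R-precision measure used in the evaluation above (the candidate strings and ratings are invented for the example):

```python
def r_precision(ranked, positives):
    """Precision at rank R, where R is the number of positive candidates."""
    r = len(positives)
    return sum(1 for c in ranked[:r] if c in positives) / r if r else 0.0

# Hypothetical LSA-weighted multi-word candidates, best first.
ranked = ["latent semantic analysis", "of the", "information retrieval", "in a"]
positives = {"latent semantic analysis", "information retrieval"}
print(r_precision(ranked, positives))  # 0.5: one of the top-2 is positive
```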
    Footnote
    Master's thesis, Information Science and Language Technology programme, Institut für Sprache und Information, Philosophische Fakultät, Heinrich-Heine-Universität Düsseldorf
    Imprint
    Düsseldorf : Heinrich-Heine-Universität / Philosophische Fakultät / Institut für Sprache und Information
  12. Schmolz, H.: Anaphora resolution and text retrieval : a linguistic analysis of hypertexts (2013) 0.03
    Content
    Winner of the VFI Dissertation Prize 2014: "A convincing, thorough linguistic and quantitative analysis of a text element that has so far received little attention in information retrieval, carried out on a large, purpose-built hypertext corpus, including the evaluation of self-developed resolution rules for use in future IR systems."
  13. Haveliwala, T.: Context-Sensitive Web search (2005) 0.02
    Abstract
    As the Web continues to grow and encompass broader and more diverse sources of information, providing effective search facilities to users becomes an increasingly challenging problem. To help users deal with the deluge of Web-accessible information, we propose a search system which makes use of context to improve search results in a scalable way. By context, we mean any sources of information, in addition to any search query, that provide clues about the user's true information need. For instance, a user's bookmarks and search history can be considered a part of the search context. We consider two types of context-based search. The first type of functionality we consider is "similarity search." In this case, as the user is browsing Web pages, URLs for pages similar to the current page are retrieved and displayed in a side panel. No query is explicitly issued; context alone (i.e., the page currently being viewed) is used to provide the user with useful related information. The second type of functionality involves taking search context into account when ranking results to standard search queries. Web search differs from traditional information retrieval tasks in several major ways, making effective context-sensitive Web search challenging. First, scalability is of critical importance. With billions of publicly accessible documents, the Web is much larger than traditional datasets. Similarly, with millions of search queries issued each day, the query load is much higher than for traditional information retrieval systems. Second, there are no guarantees on the quality of Web pages, with Web authors taking an adversarial, rather than cooperative, approach in attempts to inflate the rankings of their pages. Third, there is a significant amount of metadata embodied in the link structure corresponding to the hyperlinks between Web pages that can be exploited during the retrieval process. In this thesis, we design a search system, using the Stanford WebBase platform, that exploits the link structure of the Web to provide scalable, context-sensitive search.
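A minimal sketch of context-biased (personalized) PageRank, one way the link structure can be exploited for context-sensitive ranking as described above (the graph, damping factor, and teleport vector are assumptions for illustration, not the thesis's system):

```python
import numpy as np

def personalized_pagerank(adj, teleport, alpha=0.85, iters=50):
    """Power iteration with a context-biased teleport distribution.

    `teleport` concentrates the random jump on pages from the user's
    context (e.g., bookmarks), biasing scores toward that context.
    """
    n = adj.shape[0]
    out_degree = adj.sum(axis=1, keepdims=True)
    out_degree[out_degree == 0] = 1          # avoid division by zero for sinks
    transition = adj / out_degree            # row-stochastic link matrix
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = alpha * rank @ transition + (1 - alpha) * teleport
    return rank

# Toy 3-page web graph; the context says the user cares about page 0.
adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [0, 1, 0]], dtype=float)
print(personalized_pagerank(adj, teleport=np.array([1.0, 0.0, 0.0])))
```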
  14. Mayr, P.: Re-Ranking auf Basis von Bradfordizing für die verteilte Suche in Digitalen Bibliotheken (2009) 0.02
    Abstract
    Despite the large document volumes involved in cross-database literature searches, academic users expect the highest possible share of relevant, high-quality documents in their result sets. In particular, the order and structure of the listed results (ranking) now plays a decisive role in the design of search systems, alongside direct full-text access to the documents. Users furthermore expect flexible information systems that, among other things, allow them to influence the ranking of documents or to use alternative ranking methods. This work presents two value-added services for search systems that address the typical problems of searching for scholarly literature and can thereby measurably improve the search situation. The two value-added services, semantic heterogeneity treatment using cross-concordances as an example and re-ranking based on Bradfordizing, which come into play in different phases of the search, are described here in detail and evaluated in the empirical part of the work with respect to their effectiveness for typical subject-specific searches. The primary goal of this dissertation is to investigate whether the alternative re-ranking method presented here, Bradfordizing, is operable in the domain of bibliographic databases and whether it can be profitably deployed in information systems and offered to users. Topics and data from two evaluation projects (CLEF and KoMoHe) were used for the tests. The intellectually assessed documents come from a total of seven scholarly databases covering the social sciences, political science, economics, psychology, and medicine. The evaluation of the cross-concordances (82 topics in total) shows that retrieval results improve significantly for all cross-concordances; it also shows that interdisciplinary cross-concordances have the strongest (positive) effect on search results. The evaluation of re-ranking by Bradfordizing (164 topics in total) shows that, for most test series, documents from the core zone (core journals) yield significantly higher precision than documents from zone 2 and zone 3 (peripheral journals). For both journals and monographs, this relevance advantage of Bradfordizing can be demonstrated empirically on a very broad basis of topics and queries across two independent document corpora.
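A minimal sketch of the core Bradfordizing idea described above: re-rank a result list so that documents from the journals contributing the most hits (the Bradford core) come first. The data and the simple frequency sort are illustrative; the thesis works with explicit core/zone-2/zone-3 partitions.

```python
from collections import Counter

def bradfordize(results):
    """Re-rank a result list by journal productivity (Bradfordizing).

    Articles from journals that contribute the most hits to the result
    set are moved to the top; stable sort preserves the original order
    within each journal.
    """
    freq = Counter(doc["journal"] for doc in results)
    return sorted(results, key=lambda d: -freq[d["journal"]])

results = [
    {"id": 1, "journal": "J. Doc."},
    {"id": 2, "journal": "Scientometrics"},
    {"id": 3, "journal": "J. Doc."},
    {"id": 4, "journal": "Rare Journal"},
]
print([d["id"] for d in bradfordize(results)])  # [1, 3, 2, 4]
```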
  15. Kara, S.: ¬An ontology-based retrieval system using semantic indexing (2012) 0.02
    Abstract
    In this thesis, we present an ontology-based information extraction and retrieval system and its application to the soccer domain. In general, we deal with three issues in semantic search, namely usability, scalability and retrieval performance. We propose a keyword-based semantic retrieval approach. The performance of the system is improved considerably using domain-specific information extraction, inference and rules. Scalability is achieved by adapting a semantic indexing approach. The system is implemented using state-of-the-art technologies in the Semantic Web, and its performance is evaluated against traditional systems as well as query expansion methods. Furthermore, a detailed evaluation is provided to observe the performance gain due to domain-specific information extraction and inference. Finally, we show how we use semantic indexing to solve simple structural ambiguities.
    Source
    Information Systems. 37(2012) no. 4, S.294-305
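    To make the idea of keyword-based semantic retrieval in entry 15 concrete, here is a minimal sketch: documents are indexed by ontology concepts, a keyword is mapped to a concept via its label, and subclass inference expands the match. The tiny soccer ontology and all identifiers are invented for illustration and do not reflect the thesis's actual implementation:

```python
# Toy is-a hierarchy and label table; all identifiers are invented.
subclass_of = {"Goalkeeper": "Player", "Defender": "Player"}
labels = {"goalkeeper": "Goalkeeper", "defender": "Defender", "player": "Player"}

# Semantic index: concept -> documents annotated with that concept.
index = {"Goalkeeper": {"doc1"}, "Defender": {"doc2"}, "Player": {"doc3"}}

def expand(concept):
    """A concept matches itself and, by subclass inference, its subclasses."""
    matches = {concept}
    for sub, sup in subclass_of.items():
        if sup == concept:
            matches |= expand(sub)
    return matches

def search(keyword):
    """Map a keyword to a concept, then collect documents for all matches."""
    concept = labels.get(keyword.lower())
    if concept is None:
        return set()
    return set().union(*(index.get(c, set()) for c in expand(concept)))

print(search("player"))  # {'doc1', 'doc2', 'doc3'} thanks to inference
```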
  16. Styltsvig, H.B.: Ontology-based information retrieval (2006) 0.02
    0.020089902 = product of:
      0.06696634 = sum of:
        0.0110814385 = weight(_text_:information in 1154) [ClassicSimilarity], result of:
          0.0110814385 = score(doc=1154,freq=14.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.20526241 = fieldWeight in 1154, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=1154)
        0.03517448 = weight(_text_:retrieval in 1154) [ClassicSimilarity], result of:
          0.03517448 = score(doc=1154,freq=16.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.37811437 = fieldWeight in 1154, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=1154)
        0.020710424 = product of:
          0.041420847 = sum of:
            0.041420847 = weight(_text_:evaluation in 1154) [ClassicSimilarity], result of:
              0.041420847 = score(doc=1154,freq=6.0), product of:
                0.12900078 = queryWeight, product of:
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.030753274 = queryNorm
                0.3210899 = fieldWeight in 1154, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1154)
          0.5 = coord(1/2)
      0.3 = coord(3/10)
    
    Abstract
    In this thesis, we will present methods for introducing ontologies in information retrieval. The main hypothesis is that the inclusion of conceptual knowledge such as ontologies in the information retrieval process can contribute to the solution of major problems currently found in information retrieval. This utilization of ontologies poses a number of challenges. Our focus is on the use of similarity measures derived from the knowledge about relations between concepts in ontologies, the recognition of semantic information in texts and the mapping of this knowledge into the ontologies in use, as well as how to fuse together the ideas of ontological similarity and ontological indexing into a realistic information retrieval scenario. To achieve the recognition of semantic knowledge in a text, shallow natural language processing is used during indexing to reveal knowledge down to the level of noun phrases. Furthermore, we briefly cover the identification of semantic relations inside and between noun phrases, and discuss what kinds of problems are caused by an increase in compoundness with respect to the structure of concepts in the evaluation of queries. Measuring similarity between concepts based on distances in the structure of the ontology is discussed. In addition, a shared-nodes measure is introduced and, based on a set of intuitive similarity properties, compared to a number of different measures. In this comparison the shared-nodes measure appears to be superior, though more computationally complex. Some major problems of shared nodes are discussed; they relate to the way relations differ in the degree to which they bring the concepts they connect closer together. A generalized measure called weighted shared nodes is introduced to deal with these problems. Finally, the use of concept similarity in query evaluation is discussed. A semantic expansion approach that incorporates concept similarity is introduced, and a generalized fuzzy set retrieval model that applies expansion during query evaluation is presented. While not commonly used in present information retrieval systems, the fuzzy set model appears to offer the flexibility needed when generalizing to an ontology-based retrieval model, and with the introduction of a hierarchical fuzzy aggregation principle, compound concepts can be handled in a straightforward and natural manner.
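    The "shared nodes" measure in entry 16 is only named, not defined, in the abstract; one simple reading is to compare the sets of nodes two concepts share on their paths to the root. The sketch below implements that reading as a Jaccard overlap of ancestor sets; it is a guess at the flavour of the measure, not the thesis's definition, and it ignores the weighted variant entirely:

```python
# Toy is-a hierarchy, purely illustrative.
parents = {
    "spaniel": "dog", "dog": "mammal", "cat": "mammal",
    "mammal": "animal", "bird": "animal",
}

def ancestors(concept):
    """The node itself plus everything above it in the hierarchy."""
    nodes = {concept}
    while concept in parents:
        concept = parents[concept]
        nodes.add(concept)
    return nodes

def shared_nodes_sim(c1, c2):
    """Jaccard overlap of the two concepts' ancestor sets."""
    a1, a2 = ancestors(c1), ancestors(c2)
    return len(a1 & a2) / len(a1 | a2)

print(shared_nodes_sim("spaniel", "cat"))  # 2 shared of 5 total -> 0.4
```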
  17. Vocht, L. De: Exploring semantic relationships in the Web of Data : Semantische relaties verkennen in data op het web (2017) 0.02
    0.020047983 = product of:
      0.05011996 = sum of:
        0.0069258986 = weight(_text_:information in 4232) [ClassicSimilarity], result of:
          0.0069258986 = score(doc=4232,freq=14.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.128289 = fieldWeight in 4232, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4232)
        0.0077725356 = weight(_text_:retrieval in 4232) [ClassicSimilarity], result of:
          0.0077725356 = score(doc=4232,freq=2.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.08355226 = fieldWeight in 4232, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4232)
        0.024852777 = weight(_text_:ranking in 4232) [ClassicSimilarity], result of:
          0.024852777 = score(doc=4232,freq=2.0), product of:
            0.16634533 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.030753274 = queryNorm
            0.14940472 = fieldWeight in 4232, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4232)
        0.010568744 = product of:
          0.021137487 = sum of:
            0.021137487 = weight(_text_:evaluation in 4232) [ClassicSimilarity], result of:
              0.021137487 = score(doc=4232,freq=4.0), product of:
                0.12900078 = queryWeight, product of:
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.030753274 = queryNorm
                0.1638555 = fieldWeight in 4232, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=4232)
          0.5 = coord(1/2)
      0.4 = coord(4/10)
    
    Abstract
    After the launch of the World Wide Web, it became clear that searching documents on the Web would not be trivial. Well-known engines to search the web, like Google, focus on search in web documents using keywords. The documents are structured and indexed to ensure keywords match documents as accurately as possible. However, searching by keywords does not always suffice. It is often the case that users do not know exactly how to formulate the search query or which keywords guarantee retrieving the most relevant documents. Besides that, it occurs that users would rather browse information than look up something specific. It turned out that there is a need for systems that enable more interactivity and facilitate the gradual refinement of search queries to explore the Web. Users expect more from the Web because the short keyword-based queries they pose during search do not suffice for all cases. On top of that, the Web is changing structurally. The Web comprises, apart from a collection of documents, more and more linked data: pieces of information structured so they can be processed by machines. The semantics applied in this way allow users to indicate their search intentions to machines precisely. This is made possible by describing data following controlled vocabularies, concept lists composed by experts and published on the Web with unique identifiers. Even so, it is still not trivial to explore data on the Web. There is a large variety of vocabularies, and various data sources use different terms to identify the same concepts.
    This PhD thesis describes how to effectively explore linked data on the Web. The main focus is on scenarios where users want to discover relationships between resources rather than finding out more about something specific. Searching for a specific document or piece of information fits within the theoretical framework of information retrieval; exploratory search goes beyond such 'looking up', with users seeking more detailed understanding, further investigation or navigation of the initial search results. The ideas behind exploratory search and querying linked data merge when it comes to the way knowledge is represented and indexed by machines - how data is structured and stored for optimal searchability. Queries and information should be aligned so that searches also reveal connections between results. This implies that they take into account the same semantic entities, relevant at that moment. To realize this, we research three techniques that are evaluated one by one in an experimental set-up to assess how well they succeed in their goals. In the end, the techniques are applied to a practical use case that focuses on forming a bridge between the Web and the use of digital libraries in scientific research. Our first technique focuses on the interactive visualization of search results. Linked data resources can be brought in relation with each other at will. This leads to complex and diverse graph structures. Our technique facilitates navigation and supports a workflow that starts from a broad overview of the data, allows narrowing down to the desired level of detail, and then broadens out again. To validate the flow, two visualizations were implemented and presented to test users. The users judged the usability of the visualizations, how well the visualizations fit the workflow, and to what degree their features seemed useful for the exploration of linked data.
    There is a difference between the way users interact with resources, visually or textually, and how resources are represented for machines so they can be processed by algorithms. This difference complicates bridging the users' intents and machine-executable queries. It is important to implement this 'translation' mechanism so that it affects the search as favorably as possible in terms of performance, complexity and accuracy. To do this, we present a second technique that provides such a bridging component. Our second technique is developed around three features that support the search process: looking up, relating and ranking resources. The main goal is to ensure that the resources in the results are as precise and relevant as possible. During the evaluation of this technique, we did not only look at the precision of the search results but also investigated how the effectiveness of the search evolved while the user executed certain actions sequentially.
    When we speak about finding relationships between resources, it is necessary to dive deeper into the graph structure. The graph structure of linked data, where the semantics give meaning to the relationships between resources, enables the execution of pathfinding algorithms. The assigned weights and heuristics are base components of such algorithms and ultimately determine which resources are included in a path, and in which order. These paths explain indirect connections between resources. Our third technique proposes an algorithm that optimizes the choice of resources in terms of serendipity. Some optimizations guard the consistency of candidate paths: the coherence of consecutive connections is maximized to avoid trivial and overly arbitrary paths. The implementation uses the A* algorithm, the de facto reference when it comes to heuristically optimized minimal-cost paths. The effectiveness of the paths was measured with common automatic metrics and with surveys in which users could indicate their preference among paths generated in different ways. Finally, all our techniques are applied to a use case about publications in digital libraries, where they are aligned with information about scientific conferences and researchers. This use case is a practical example in which the different aspects of exploratory search come together; in fact, the techniques also evolved from the experience of implementing it. Practical details of the semantic model are explained and the implementation of the search system is clarified module by module. The evaluation positions the result, a prototype of a tool to explore scientific publications, researchers and conferences, alongside some important alternatives.
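    Since the abstract names the A* algorithm for weighted pathfinding between resources, a minimal sketch may help. The toy graph, the edge weights and the zero heuristic are assumptions; the thesis's serendipity-oriented weighting and coherence optimisations are not reproduced here:

```python
import heapq

# resource -> [(neighbour, edge_weight)]; illustrative only.
graph = {
    "paperA": [("authorX", 1.0), ("confC", 2.0)],
    "authorX": [("paperB", 1.0)],
    "confC": [("paperB", 0.5)],
    "paperB": [],
}

def a_star(start, goal, h=lambda n: 0.0):
    """A* over the toy graph; h defaults to 0, reducing it to Dijkstra."""
    frontier = [(h(start), 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for nxt, weight in graph.get(node, []):
            new_cost = cost + weight
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(frontier,
                               (new_cost + h(nxt), new_cost, nxt, path + [nxt]))
    return None

print(a_star("paperA", "paperB"))  # (2.0, ['paperA', 'authorX', 'paperB'])
```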
  18. Roßmann, N.: Website-usability : Landtag NRW (2002) 0.02
    0.019398976 = product of:
      0.06466325 = sum of:
        0.00418839 = weight(_text_:information in 1076) [ClassicSimilarity], result of:
          0.00418839 = score(doc=1076,freq=2.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.0775819 = fieldWeight in 1076, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=1076)
        0.03976444 = weight(_text_:ranking in 1076) [ClassicSimilarity], result of:
          0.03976444 = score(doc=1076,freq=2.0), product of:
            0.16634533 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.030753274 = queryNorm
            0.23904754 = fieldWeight in 1076, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03125 = fieldNorm(doc=1076)
        0.020710424 = product of:
          0.041420847 = sum of:
            0.041420847 = weight(_text_:evaluation in 1076) [ClassicSimilarity], result of:
              0.041420847 = score(doc=1076,freq=6.0), product of:
                0.12900078 = queryWeight, product of:
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.030753274 = queryNorm
                0.3210899 = fieldWeight in 1076, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1076)
          0.5 = coord(1/2)
      0.3 = coord(3/10)
    
    Abstract
    The study examines the usability of the website of the state parliament (Landtag) of North Rhine-Westphalia. It analyses how user-friendly the website is, whether it can be used effectively, and whether the site's content meets users' expectations. Three evaluation methods are employed: a thinking-aloud test, a heuristic evaluation and an availability study. The thinking-aloud test used here is a user test in which twenty laypersons and twenty information professionals first state their expectations of the site and then, under observation, work through tasks on the homepage; afterwards they give an assessment by means of a questionnaire. Heuristic evaluation is an expert-centred method: usability experts draw up checklists against which a website is examined, and this study applies three such heuristics. The availability study of the site's own search engine is a library-science evaluation method that determines the quality of the search engine with respect to the reachability of documents by searching for web pages known to be live. The three methods complement one another and, in their variety, uncover a large pool of usability problems. The results: users particularly miss information on the state budget, a list of staff members with their responsibilities, background information on current state-policy topics, and discussion forums as a means of communication. On average, the number of clicks needed to solve a task exceeds the number required for the shortest path. With 40 participants working on six tasks each, the abandonment rate due to tasks that appeared unsolvable is 13.33% (32 abandonments). In the final survey the test subjects are largely satisfied; the exception is the area of navigation: 40% find the required links hard to locate, and 45% of all participants do not always know where on the site they are.
    The heuristic evaluation likewise uncovers navigation errors. For example, the browser's back function is mis-programmed: the back button must always be pressed twice instead of once if one wants more than a single frame to move back. Moreover, the hierarchy levels are not highlighted clearly enough, and links already visited do not differ in colour from those not yet clicked. Many photos serve aesthetic purposes rather than the comprehensibility of the texts. Internal abbreviations are not always expanded and are thus incomprehensible to the user. The search engine achieves an availability of only 60%, which is attributable to the large share of texts from the parliament's journal in the database and to the use of word frequencies in relevance ranking. The points of criticism and suggested changes concern not so much the overall structure of the homepage as many details. Even if some of these points seem to be mere trifles, implementing these details, as recorded in the usability report, will be a great gain for user-friendliness.
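    The availability measure used in the study is simple enough to state as code: query the site's search engine for documents known to be live and report the share that is actually found. The stub search engine below is invented so that the toy run, like the study, lands at 60%; it illustrates the metric only:

```python
def availability(known_active_docs, search_engine):
    """Share (in %) of known-live documents the search engine retrieves."""
    found = sum(1 for doc in known_active_docs if doc in search_engine(doc))
    return 100.0 * found / len(known_active_docs)

def search_engine(query):
    """Stub index that 'misses' two of the five known documents."""
    index = {"doc1", "doc2", "doc3"}
    return {query} if query in index else set()

docs = ["doc1", "doc2", "doc3", "doc4", "doc5"]
print(f"{availability(docs, search_engine):.0f}%")  # -> 60%
```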
  19. Schneider, A.: Moderne Retrievalverfahren in klassischen bibliotheksbezogenen Anwendungen : Projekte und Perspektiven (2008) 0.02
    0.016782397 = product of:
      0.05594132 = sum of:
        0.0110814385 = weight(_text_:information in 4031) [ClassicSimilarity], result of:
          0.0110814385 = score(doc=4031,freq=14.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.20526241 = fieldWeight in 4031, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=4031)
        0.032902714 = weight(_text_:retrieval in 4031) [ClassicSimilarity], result of:
          0.032902714 = score(doc=4031,freq=14.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.3536936 = fieldWeight in 4031, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=4031)
        0.011957168 = product of:
          0.023914335 = sum of:
            0.023914335 = weight(_text_:evaluation in 4031) [ClassicSimilarity], result of:
              0.023914335 = score(doc=4031,freq=2.0), product of:
                0.12900078 = queryWeight, product of:
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.030753274 = queryNorm
                0.18538132 = fieldWeight in 4031, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4031)
          0.5 = coord(1/2)
      0.3 = coord(3/10)
    
    Abstract
    This thesis deals with modern retrieval methods in classical library-related applications. As the combination of the two seemingly opposed phrases in the title indicates, it links aspects of computer and information science with aspects of library tradition. After a brief outline of the starting situation, the so-called information flood, in the first chapter, the second chapter introduces the theory of information retrieval. It covers the foundations of information retrieval and of information retrieval systems as well as the various approaches to subject access: descriptive and subject cataloguing, indexing, and automatic indexing. Within the theory of information retrieval, different retrieval models and evaluation by means of retrieval tests are also presented. Theory is followed in the third chapter by the practice of information retrieval, distinguishing application within organisations, in the information and documentation sector, and in libraries. Application within an organisation is illustrated by the KURS database for education and training. Application in libraries relates primarily to the OPAC as a compromise between library indexing and end-user requirements, and to its enrichment (so-called catalogue enrichment) to improve retrieval. The library sector is treated in more detail, with a review of completed projects on information and indexing systems from the nineties (OSIRIS, MILOS I and II, KASCADE) and a look at current projects. The following two chapters each present a current project on improving retrieval through catalogue enrichment, automatic indexing and advanced retrieval methods: the search portal dandelon.com and the 180T project of the Hochschulbibliothekszentrum of North Rhine-Westphalia. In each case the project goal, partners, organisation, course and technology used are presented. The projects differ in that in one case a large union-catalogue centre coordinates the project, while in the other each participating library is itself responsible for implementation. The sixth and final chapter offers conclusions and perspectives, evaluating the two projects described and giving an outlook on developments concerning the library catalogue. This publication originates from a master's thesis in the postgraduate distance-learning programme Master of Arts (Library and Information Science) at Humboldt-Universität zu Berlin.
  20. Witschel, H.F.: Global and local resources for peer-to-peer text retrieval (2008) 0.02
    0.015576481 = product of:
      0.051921602 = sum of:
        0.006347691 = weight(_text_:information in 127) [ClassicSimilarity], result of:
          0.006347691 = score(doc=127,freq=6.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.11757882 = fieldWeight in 127, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=127)
        0.03077767 = weight(_text_:retrieval in 127) [ClassicSimilarity], result of:
          0.03077767 = score(doc=127,freq=16.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.33085006 = fieldWeight in 127, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02734375 = fieldNorm(doc=127)
        0.014796241 = product of:
          0.029592482 = sum of:
            0.029592482 = weight(_text_:evaluation in 127) [ClassicSimilarity], result of:
              0.029592482 = score(doc=127,freq=4.0), product of:
                0.12900078 = queryWeight, product of:
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.030753274 = queryNorm
                0.2293977 = fieldWeight in 127, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=127)
          0.5 = coord(1/2)
      0.3 = coord(3/10)
    
    Abstract
    This thesis is organised as follows: Chapter 2 gives a general introduction to the field of information retrieval, covering its most important aspects. Further, the tasks of distributed and peer-to-peer information retrieval (P2PIR) are introduced, motivating their application and characterising the special challenges that they involve, including a review of existing architectures and search protocols in P2PIR. Finally, chapter 2 presents approaches to evaluating the effectiveness of both traditional and peer-to-peer IR systems. Chapter 3 contains a detailed account of state-of-the-art information retrieval models and algorithms. This encompasses models for matching queries against document representations, term weighting algorithms, approaches to feedback and associative retrieval as well as distributed retrieval. It thus defines important terminology for the following chapters. The notion of "multi-level association graphs" (MLAGs) is introduced in chapter 4. An MLAG is a simple, graph-based framework that allows most of the theoretical and practical approaches to IR presented in chapter 3 to be modeled. Moreover, it provides an easy-to-grasp way of defining new entities, such as paragraphs or peers, and including them in IR modeling, dividing them conceptually while at the same time connecting them to each other in a meaningful way. This allows for a unified view on many IR tasks, including that of distributed and peer-to-peer search. Starting from related work and a formal definition of the framework, the possibilities of modeling that it provides are discussed in detail, followed by an experimental section that shows how new insights gained from modeling inside the framework can lead to novel combinations of principles and eventually to improved retrieval effectiveness.
    Chapter 5 empirically tackles the first of the two research questions formulated above, namely the question of global collection statistics. More precisely, it studies possibilities of radically simplified results merging. The simplification comes from the attempt - without having knowledge of the complete collection - to equip all peers with the same global statistics, making document scores comparable across peers. What is examined is how such global statistics can be obtained and to what extent their use leads to a drop in retrieval effectiveness. In chapter 6, the second research question is tackled, namely that of making forwarding decisions for queries based on profiles of other peers. After a review of related work in that area, the chapter first defines the approaches that will be compared against each other. Then a novel evaluation framework is introduced, including a new measure for comparing the results of a distributed search engine against those of a centralised one. Finally, the actual evaluation is performed using the new framework.
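    The merging simplification studied in chapter 5 can be sketched briefly: if every peer scores with the same global document frequencies, result lists can be merged by raw score, with no per-peer normalisation. The tf-idf weighting and the toy collections below are assumptions for illustration, not Witschel's actual experimental setup:

```python
import math

GLOBAL_DF = {"peer": 50, "retrieval": 200}   # shared by all peers
GLOBAL_N = 10_000                            # assumed global doc count

def score(term_freqs, query):
    """tf-idf with the shared global statistics, so scores are comparable."""
    return sum(term_freqs.get(t, 0) * math.log(GLOBAL_N / GLOBAL_DF[t])
               for t in query if t in GLOBAL_DF)

def peer_results(collection, query):
    return [(doc, score(tf, query)) for doc, tf in collection.items()]

peer1 = {"d1": {"retrieval": 3}, "d2": {"peer": 1}}
peer2 = {"d3": {"peer": 2, "retrieval": 1}}

query = ["peer", "retrieval"]
merged = sorted(peer_results(peer1, query) + peer_results(peer2, query),
                key=lambda r: r[1], reverse=True)
print(merged)  # d3, d1, d2 - merged by value, no per-peer normalisation
```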

Languages

  • d 229
  • e 38
  • f 2
  • a 1
  • hu 1
  • pt 1

Types