Search (70 results, page 1 of 4)

  • Filter: type_ss:"x"
  • Filter: year_i:[2000 TO 2010}
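    The trailing "}" in the year filter is deliberate: in Lucene/Solr range syntax, "[" marks an inclusive bound and "}" an exclusive one, so this filter matches publication years 2000 through 2009. As a minimal sketch, such a filtered query could be reproduced over HTTP against a Solr endpoint roughly as follows; the host, core name and query terms are placeholder assumptions, while q, fq, rows and wt are standard Solr parameters:

```python
# Hedged sketch: reproduce this page's facet filters against a Solr core.
# Host, core name ("catalog") and query terms are placeholders.
import requests

params = {
    "q": "ontology retrieval",            # hypothetical search terms
    "fq": ['type_ss:"x"',                 # facet filter on record type
           "year_i:[2000 TO 2010}"],      # [ inclusive 2000 .. } exclusive 2010
    "rows": 20,
    "wt": "json",
}
resp = requests.get("http://localhost:8983/solr/catalog/select", params=params)
for doc in resp.json()["response"]["docs"]:
    print(doc.get("id"), doc.get("title"))
```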
  1. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.29
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Because of their purely syntactical nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning rather than its representation). This leads to very low usefulness of the results of a retrieval process for a user's task at hand. In the last ten years, ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the very ambiguous nature of the retrieval process, in which a user, unfamiliar with the underlying repository and/or query syntax, merely approximates his information need in a query, implies the necessity of including the user more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In this way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the ability to conceptualize a user's information need in the right manner and to interpret the retrieval results accordingly is a key issue in realizing much more meaningful information retrieval systems.
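    As a toy illustration of this interplay (a sketch of the general idea only, not Stojanovic's actual Librarian Agent), ontology-based query refinement can be pictured as interpreting an ambiguous query term against a small domain ontology and offering the user its narrower concepts as candidate interpretations; the ontology, terms and function names below are invented:

```python
# Toy sketch of ontology-driven query refinement; all data is invented.
ONTOLOGY = {  # concept -> narrower concepts in a hypothetical domain ontology
    "jaguar": ["jaguar (animal)", "jaguar (car)"],
    "jaguar (animal)": ["panthera onca"],
    "jaguar (car)": ["jaguar xj", "jaguar e-type"],
}

def refinements(term):
    """Candidate conceptual interpretations of an ambiguous query term."""
    return ONTOLOGY.get(term, [])

def refine(term):
    options = refinements(term)
    if not options:
        return term               # nothing to disambiguate
    print("Did you mean:", ", ".join(options), "?")
    return options[0]             # a real system would let the user choose

print(refine("jaguar"))
```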
    Content
    See: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  2. Styltsvig, H.B.: Ontology-based information retrieval (2006) 0.05
    
    Abstract
    In this thesis, we present methods for introducing ontologies into information retrieval. The main hypothesis is that the inclusion of conceptual knowledge such as ontologies in the information retrieval process can contribute to the solution of major problems currently found in information retrieval. This utilization of ontologies poses a number of challenges. Our focus is on the use of similarity measures derived from the knowledge about relations between concepts in ontologies, the recognition of semantic information in texts and the mapping of this knowledge into the ontologies in use, as well as on how to fuse the ideas of ontological similarity and ontological indexing into a realistic information retrieval scenario. To achieve the recognition of semantic knowledge in a text, shallow natural language processing is used during indexing, revealing knowledge down to the level of noun phrases. Furthermore, we briefly cover the identification of semantic relations inside and between noun phrases, and discuss the kinds of problems caused by an increase in compoundness with respect to the structure of concepts in the evaluation of queries. Measuring similarity between concepts based on distances in the structure of the ontology is discussed. In addition, a shared nodes measure is introduced and, based on a set of intuitive similarity properties, compared to a number of different measures. In this comparison the shared nodes measure appears to be superior, though more computationally complex. Some major problems of the shared nodes measure are discussed; they relate to the fact that relations differ in the degree to which they bring the concepts they connect closer together. A generalized measure called weighted shared nodes is introduced to deal with these problems. Finally, the utilization of concept similarity in query evaluation is discussed. A semantic expansion approach that incorporates concept similarity is introduced, and a generalized fuzzy set retrieval model that applies expansion during query evaluation is presented. While not commonly used in present information retrieval systems, the fuzzy set model appears to provide the flexibility needed when generalizing to an ontology-based retrieval model, and, with the introduction of a hierarchical fuzzy aggregation principle, compound concepts can be handled in a straightforward and natural manner.
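    A minimal sketch may make the shared-nodes idea concrete: two concepts are similar to the extent that their ancestor sets in the is-a hierarchy overlap. The toy ontology and the Jaccard-style normalisation below are assumptions for illustration; the thesis's exact definition and its weighted generalisation differ:

```python
# Toy "shared nodes" similarity over an invented is-a hierarchy.
PARENTS = {"dog": ["mammal"], "cat": ["mammal"],
           "mammal": ["animal"], "animal": []}

def ancestors(concept):
    """The concept together with everything reachable via broader links."""
    seen, stack = set(), [concept]
    while stack:
        node = stack.pop()
        for parent in PARENTS.get(node, []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen | {concept}

def shared_nodes_sim(a, b):
    sa, sb = ancestors(a), ancestors(b)
    return len(sa & sb) / len(sa | sb)

print(shared_nodes_sim("dog", "cat"))   # 0.5: 'mammal' and 'animal' are shared
```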
    Content
    A dissertation presented to the Faculties of Roskilde University in partial fulfillment of the requirements for the degree of Doctor of Philosophy. See: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.117.987 or http://coitweb.uncc.edu/~ras/RS/Onto-Retrieval.pdf.
  3. Slavic-Overfield, A.: Classification management and use in a networked environment : the case of the Universal Decimal Classification (2005) 0.05
    
    Abstract
    In the Internet information space, advanced information retrieval (IR) methods and automatic text processing are used in conjunction with traditional knowledge organization systems (KOS). New information technology provides a platform for better KOS publishing, exploitation and sharing, for both human and machine use. Networked KOS services are now being planned and developed as powerful tools for resource discovery. They will enable automatic contextualisation, interpretation and query matching to different indexing languages. The Semantic Web promises to be an environment in which the quality of semantic relationships in bibliographic classification systems can be fully exploited. Their use in the networked environment is, however, limited by the fact that they are not prepared or made available for advanced machine processing. The UDC was chosen for this research because of its widespread use and its long-term presence in online information retrieval systems. It was also the first system to be used for the automatic classification of Internet resources, and the first to be made available as a classification tool on the Web. The objective of this research is to establish the advantages of using UDC for information retrieval in a networked environment, to highlight the problems of automation and classification exchange, and to offer possible solutions. The first research question was: is there enough evidence of the use of classification on the Internet to justify further development with this particular environment in mind? The second question is: what are the automation requirements for the full exploitation of UDC and its exchange? The third question is: which areas are in need of improvement, and what specific recommendations can be made for implementing the UDC in a networked environment? A summary of the changes required in the management and development of the UDC to facilitate its full adaptation for future use is drawn from this analysis.
    Content
    Thesis submitted for the Degree of Doctor of Philosophy at the University of London
    Theme
    Classification systems in online retrieval
  4. Eckert, K.: Thesaurus analysis and visualization in semantic search applications (2007) 0.04
    
    Abstract
    The use of thesaurus-based indexing is a common approach to increasing the performance of information retrieval. In this thesis, we examine the suitability of a thesaurus for a given set of information and evaluate improvements to existing thesauri to get better search results. In this area, we focus on two aspects: 1. We demonstrate an analysis of the indexing results achieved by an automatic document indexer and the involved thesaurus. 2. We propose a method for thesaurus evaluation which is based on a combination of statistical measures and appropriate visualization techniques that support the detection of potential problems in a thesaurus. In this chapter, we give an overview of the context of our work. Next, we briefly outline the basics of thesaurus-based information retrieval and describe the Collexis Engine that was used for our experiments. In Chapter 3, we describe two experiments in automatically indexing documents in the areas of medicine and economics with corresponding thesauri and compare the results to available manual annotations. Chapter 4 describes methods for assessing thesauri and visualizing the results in terms of a treemap. We depict examples of interesting observations supported by the method and show that we actually find critical problems. We conclude with a discussion of open questions and future research in Chapter 5.
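    The statistical half of such an analysis can be sketched compactly: count how often the automatic indexer assigns each thesaurus concept and flag concepts that are never used. The thesaurus terms and indexing results below are invented, and the treemap visualisation described above is omitted:

```python
# Toy usage statistics for a thesaurus; data is invented for illustration.
from collections import Counter

thesaurus = {"economy", "inflation", "trade", "tariff"}
indexing_results = [            # concepts assigned to each document
    {"economy", "inflation"},
    {"economy", "trade"},
    {"economy"},
]

usage = Counter(c for doc in indexing_results for c in doc)
for concept in sorted(thesaurus):
    n = usage[concept]          # Counter returns 0 for unseen concepts
    note = "  <- never assigned, candidate for review" if n == 0 else ""
    print(f"{concept}: {n}{note}")
```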
  5. Witschel, H.F.: Global and local resources for peer-to-peer text retrieval (2008) 0.04
    
    Abstract
    This thesis is organised as follows: Chapter 2 gives a general introduction to the field of information retrieval, covering its most important aspects. Further, the tasks of distributed and peer-to-peer information retrieval (P2PIR) are introduced, motivating their application and characterising the special challenges that they involve, including a review of existing architectures and search protocols in P2PIR. Finally, Chapter 2 presents approaches to evaluating the effectiveness of both traditional and peer-to-peer IR systems. Chapter 3 contains a detailed account of state-of-the-art information retrieval models and algorithms. This encompasses models for matching queries against document representations, term weighting algorithms, approaches to feedback and associative retrieval as well as distributed retrieval. It thus defines important terminology for the following chapters. The notion of "multi-level association graphs" (MLAGs) is introduced in Chapter 4. An MLAG is a simple, graph-based framework that makes it possible to model most of the theoretical and practical approaches to IR presented in Chapter 3. Moreover, it provides an easy-to-grasp way of defining and including new entities into IR modeling, such as paragraphs or peers, dividing them conceptually while at the same time connecting them to each other in a meaningful way. This allows for a unified view of many IR tasks, including that of distributed and peer-to-peer search. Starting from related work and a formal definition of the framework, the possibilities of modeling that it provides are discussed in detail, followed by an experimental section that shows how new insights gained from modeling inside the framework can lead to novel combinations of principles and eventually to improved retrieval effectiveness.
    Chapter 5 empirically tackles the first of the two research questions formulated above, namely the question of global collection statistics. More precisely, it studies possibilities of radically simplified results merging. The simplification comes from the attempt - without having knowledge of the complete collection - to equip all peers with the same global statistics, making document scores comparable across peers. What is examined is the question of how we can obtain such global statistics and to what extent their use will lead to a drop in retrieval effectiveness. In Chapter 6, the second research question is tackled, namely that of making forwarding decisions for queries, based on profiles of other peers. After a review of related work in that area, the chapter first defines the approaches that will be compared against each other. Then, a novel evaluation framework is introduced, including a new measure for comparing results of a distributed search engine against those of a centralised one. Finally, the actual evaluation is performed using the new framework.
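    The global-statistics question can be pictured with a small sketch: if every peer reports its local document frequencies and collection size once, all peers can score with the same global idf, making scores comparable across peers. The peer data below are invented, and the sketch simply assumes the statistics can be collected exactly, which is the assumption the thesis relaxes:

```python
# Toy merging of per-peer collection statistics into one global idf.
import math

peers = [  # (local collection size, term -> local document frequency)
    (1000, {"ontology": 40, "retrieval": 200}),
    (5000, {"ontology": 10, "retrieval": 900}),
]

global_n = sum(size for size, _ in peers)
global_df = {}
for _, dfs in peers:
    for term, df in dfs.items():
        global_df[term] = global_df.get(term, 0) + df

def global_idf(term):
    """Identical on every peer once the merged statistics are shared."""
    return math.log(global_n / global_df[term])

print(round(global_idf("ontology"), 3))
```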
  6. Kirk, J.: Theorising information use : managers and their work (2002) 0.04
    
    Abstract
    The focus of this thesis is information use. Although a key concept in information behaviour, information use has received little attention from information science researchers. Studies of other key concepts such as information need and information seeking are dominant in information behaviour research. Information use is an area of interest to information professionals who rely on research outcomes to shape their practice. There are few empirical studies of how people actually use information that might guide and refine the development of information systems, products and services.
    Content
    A thesis submitted to the University of Technology, Sydney in fulfilment of the requirements for the degree of Doctor of Philosophy. - See: http://epress.lib.uts.edu.au/dspace/bitstream/2100/309/2/02whole.pdf.
    Imprint
    Sydney : University of Technology / Faculty of Humanities and Social Sciences
  7. Francu, V.: Multilingual access to information using an intermediate language (2003) 0.04
    
    Abstract
    While information is, in theory, widely available, linguistic barriers can restrict it from more general use. The linguistic aspects of information languages, and particularly the chances of enhanced access to information by means of multilingual access facilities, form the substance of this thesis. The main problem of this research is thus to demonstrate that information retrieval can be improved by searching with multilingual thesaurus terms based on an intermediate or switching language. Universal classification systems in general can play the role of switching languages, for reasons dealt with in the forthcoming pages. The Universal Decimal Classification (UDC) in particular is the classification system used here as an example of a switching language. The question may arise: why a universal classification system and not another thesaurus? Because the UDC, like most classification systems, uses symbols. It is therefore language-independent, and the problems of compatibility between such a thesaurus and various other thesauri in different languages are avoided. Another question may arise: why not, then, assign running numbers to the descriptors in a thesaurus and make a switching language out of the resulting enumerative system? Because of some other characteristics of the UDC: hierarchical structure and terminological richness, consistency and control. One big problem to find an answer to is: can a thesaurus be built on the basis of a classification system in any and all its parts, and to what extent can this question be given an affirmative answer? This depends much on the attributes of the universal classification system which can be favourably used to this purpose. Examples of different situations will be given and discussed, beginning with those classes of UDC which are best fitted for building a thesaurus structure out of them (classes which are both hierarchical and faceted)...
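    A toy sketch of the switching-language idea: descriptors in each language are mapped to a UDC notation, and the notation serves as the language-independent pivot. The class notations 02 (librarianship) and 51 (mathematics) are real UDC classes; the descriptor tables and the Romanian target side are assumptions for illustration:

```python
# Toy multilingual access via UDC notations as a switching language.
EN_TO_UDC = {"libraries": "02", "mathematics": "51"}
UDC_TO_RO = {"02": "biblioteci", "51": "matematica"}   # assumed Romanian side

def translate(term_en):
    """English descriptor -> UDC notation -> target-language descriptor."""
    notation = EN_TO_UDC.get(term_en)
    return UDC_TO_RO.get(notation)

print(translate("libraries"))   # -> 'biblioteci'
```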
    Content
    Contents: INFORMATION LANGUAGES: A LINGUISTIC APPROACH - MULTILINGUAL ASPECTS IN INFORMATION STORAGE AND RETRIEVAL - COMPATIBILITY AND CONVERTIBILITY OF INFORMATION LANGUAGES - CURRENT TRENDS IN MULTILINGUAL ACCESS - BUILDING UDC-BASED MULTILINGUAL THESAURI - ONLINE APPLICATIONS OF THE UDC-BASED MULTILINGUAL THESAURI - THE IMPACT OF SPECIFICITY ON THE RETRIEVAL POWER OF A UDC-BASED MULTILINGUAL THESAURUS - FINAL REMARKS AND GENERAL CONCLUSIONS. Thesis submitted in fulfilment of the requirements for the degree of Doctor in Language and Literature at the University of Antwerp. - See: http://dlist.sir.arizona.edu/1862/.
  8. Haveliwala, T.: Context-Sensitive Web search (2005) 0.03
    
    Abstract
    As the Web continues to grow and encompass broader and more diverse sources of information, providing effective search facilities to users becomes an increasingly challenging problem. To help users deal with the deluge of Web-accessible information, we propose a search system which makes use of context to improve search results in a scalable way. By context, we mean any sources of information, in addition to the search query, that provide clues about the user's true information need. For instance, a user's bookmarks and search history can be considered part of the search context. We consider two types of context-based search. The first type of functionality we consider is "similarity search." In this case, as the user is browsing Web pages, URLs for pages similar to the current page are retrieved and displayed in a side panel. No query is explicitly issued; context alone (i.e., the page currently being viewed) is used to provide the user with useful related information. The second type of functionality involves taking search context into account when ranking results to standard search queries. Web search differs from traditional information retrieval tasks in several major ways, making effective context-sensitive Web search challenging. First, scalability is of critical importance. With billions of publicly accessible documents, the Web is much larger than traditional datasets. Similarly, with millions of search queries issued each day, the query load is much higher than for traditional information retrieval systems. Second, there are no guarantees on the quality of Web pages, with Web authors taking an adversarial, rather than cooperative, approach in attempts to inflate the rankings of their pages. Third, there is a significant amount of metadata embodied in the link structure corresponding to the hyperlinks between Web pages that can be exploited during the retrieval process. In this thesis, we design a search system, using the Stanford WebBase platform, that exploits the link structure of the Web to provide scalable, context-sensitive search.
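    One way to picture ranking that exploits link structure under a context bias (a sketch in the spirit of this line of work, not the system built in the thesis) is to replace PageRank's uniform teleport vector with a context distribution, so that pages favoured by the context, and the pages they link to, score higher. The tiny graph, damping factor and context vector below are invented:

```python
# Toy context-biased PageRank by power iteration; all data is invented.
DAMPING = 0.85
links = {0: [1, 2], 1: [2], 2: [0]}    # page -> pages it links to
context = [0.0, 1.0, 0.0]              # the user's context favours page 1

rank = [1 / 3] * 3
for _ in range(50):                    # power iteration until (near) fixpoint
    new = [(1 - DAMPING) * context[i] for i in range(3)]
    for src, outs in links.items():
        for dst in outs:
            new[dst] += DAMPING * rank[src] / len(outs)
    rank = new

print([round(r, 3) for r in rank])     # the boost to page 1 also lifts page 2
```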
  9. Mao, M.: Ontology mapping : towards semantic interoperability in distributed and heterogeneous environments (2008) 0.03
    
    Abstract
    This dissertation studies ontology mapping: the problem of finding semantic correspondences between similar elements of different ontologies. In the dissertation, elements denote classes or properties of ontologies. The goal of this research is to use ontology mapping to make heterogeneous information more accessible. The World Wide Web (WWW) is now widely used as a universal medium for information exchange. Semantic interoperability among different information systems in the WWW is limited due to information heterogeneity and the non-semantic nature of HTML and URLs. Ontologies have been suggested as a way to solve the problem of information heterogeneity by providing formal, explicit definitions of data and reasoning ability over related concepts. Given that no universal ontology exists for the WWW, work has focused on finding semantic correspondences between similar elements of different ontologies, i.e., ontology mapping. Ontology mapping can be done either by hand or using automated tools. Manual mapping becomes impractical as the size and complexity of ontologies increase. Fully or semi-automated mapping approaches have been examined in several research studies; previous approaches include analyzing linguistic information of elements in ontologies, treating ontologies as structural graphs, applying heuristic rules and machine learning techniques, and using probabilistic and reasoning methods. In this dissertation, two generic ontology mapping approaches are proposed. One is the PRIOR+ approach, which utilizes both information retrieval and artificial intelligence techniques in the context of ontology mapping. The other is the non-instance learning based approach, which experimentally explores machine learning algorithms to solve the ontology mapping problem without requiring any instances. The results of PRIOR+ on different tests at the OAEI ontology matching campaign 2007 are encouraging. The non-instance learning based approach has shown potential for solving the ontology mapping problem on the OAEI benchmark tests.
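    A toy element-level matcher makes the task concrete: candidate correspondences between two ontologies are ranked by the token overlap of their class labels. This is only the linguistic ingredient; systems such as PRIOR+ combine such scores with structural and other evidence. The labels below are invented:

```python
# Toy label-based ontology matching via token overlap; labels are invented.
def label_sim(a, b):
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

onto1 = ["Journal Article", "Conference Paper"]
onto2 = ["Article in Journal", "Paper at Conference", "Book"]

for c1 in onto1:                       # best match for each source class
    best = max(onto2, key=lambda c2: label_sim(c1, c2))
    print(f"{c1!r} -> {best!r} ({label_sim(c1, best):.2f})")
```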
    Content
    Submitted to the Graduate Faculty of the School of Information Sciences in partial fulfillment of the requirements for the degree of Doctor of Philosophy.
    Imprint
    Pittsburgh : University of Pittsburgh
  10. Markó, K.G.: Foundation, implementation and evaluation of the MorphoSaurus system (2008) 0.02
    
    Abstract
    This work proposes an approach intended to meet the particular challenges of Medical Language Processing, in particular medical information retrieval. At its core lies a new type of dictionary, in which the entries are equivalence classes of subwords, i.e., semantically minimal units. These equivalence classes capture intralingual as well as interlingual synonymy. As equivalence classes abstract away from subtle particularities within and between languages, and reference to them is realized via a language-independent conceptual system, they form an interlingua. In this work, the theoretical foundations of this approach are elaborated. Furthermore, design considerations for applications based on the subword methodology are drawn up, and showcase implementations are evaluated in detail. Starting with the introduction of Medical Linguistics as a field of active research in Chapter two, its consideration as a domain separate from general linguistics is motivated. In particular, morphological phenomena inherent to medical language are examined in more detail, which leads to an alternative view of medical terms and the introduction of the notion of subwords. Chapter three describes the formal foundation of subwords and the underlying linguistic declarative as well as procedural knowledge. An implementation of the subword model for the medical domain, the MorphoSaurus system, is presented in Chapter four. Emphasis is given to the multilingual aspect of the proposed approach, covering English, German, and Portuguese. The automatic acquisition of (medical) subwords for other languages (Spanish, French, and Swedish) and their integration into already available resources is described in the fifth Chapter.
    The proper handling of acronyms plays a crucial role in medical texts, e.g. in patient records, as well as in scientific literature. Chapter six presents an approach in which acronyms are automatically acquired from (bio-)medical literature. Furthermore, acronyms and their definitions in different languages are linked to each other using the MorphoSaurus text processing system. Automatic word sense disambiguation is still one of the most challenging tasks in Natural Language Processing. In Chapter seven, cross-lingual considerations lead to a new methodology for automatic disambiguation applied to subwords. Beginning with Chapter eight, a series of applications based on MorphoSaurus is introduced. Firstly, the implementation of the subword approach within a cross-language information retrieval setting for the medical domain is described and evaluated on standard test document collections. In Chapter nine, this methodology is extended to multilingual information retrieval in the Web, for which user queries are translated into target languages based on the segmentation into subwords and their interlingual mappings. The cross-lingual, automatic assignment of document descriptors to documents is the topic of Chapter ten. A large-scale evaluation of a heuristic as well as a statistical algorithm is carried out using a prominent medical thesaurus as a controlled vocabulary. In Chapter eleven, it is shown how MorphoSaurus can be used to map monolingual lexical resources across different languages. As a result, a large multilingual medical lexicon with high coverage and complete lexical information is built and evaluated against a comparable, already available and commonly used lexical repository for the medical domain. Chapter twelve sketches a few further applications based on MorphoSaurus. The generality and applicability of the subword approach to other domains is outlined, and proofs of concept in real-world scenarios are presented. Finally, Chapter thirteen recapitulates the most important aspects of MorphoSaurus, and the potential benefit of its employment in medical information systems is carefully assessed, both for medical experts in their everyday work and with regard to health care consumers and their existential information needs.
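    The core subword mechanism can be sketched in a few lines: a term is segmented by greedy longest match against a subword lexicon, and each subword is mapped to a language-independent equivalence class. The lexicon entries and class identifiers below are invented, and the real MorphoSaurus segmentation is considerably more sophisticated:

```python
# Toy subword segmentation with an invented lexicon and class identifiers.
LEXICON = {
    "gastr": "C_STOMACH",
    "o": None,                          # linking morpheme, no meaning of its own
    "enter": "C_INTESTINE",
    "itis": "C_INFLAMMATION",
}

def segment(term):
    """Greedy longest-match segmentation of a term into lexicon subwords."""
    parts, i = [], 0
    while i < len(term):
        for j in range(len(term), i, -1):   # try the longest candidate first
            if term[i:j] in LEXICON:
                parts.append(term[i:j])
                i = j
                break
        else:
            raise ValueError(f"no subword at position {i} of {term!r}")
    return parts

classes = [LEXICON[s] for s in segment("gastroenteritis") if LEXICON[s]]
print(classes)   # ['C_STOMACH', 'C_INTESTINE', 'C_INFLAMMATION']
```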
    Source
    Subword indexing, lexical learning and word sense disambiguation for medical crosslanguage information retrieval
  11. Tzitzikas, Y.: Collaborative ontology-based information indexing and retrieval (2002) 0.02
    
    Abstract
    An information system like the Web is a continuously evolving system consisting of multiple heterogeneous information sources, covering a wide domain of discourse, and a huge number of users (human or software) with diverse characteristics and needs that produce and consume information. The challenge nowadays is to build a scalable information infrastructure enabling the effective, accurate, content-based retrieval of information, in a way that adapts to the characteristics and interests of the users. The aim of this work is to propose formally sound methods for building such an information network based on ontologies, which are widely used and easy to grasp for ordinary Web users. The main results of this work are: - A novel scheme for indexing and retrieving objects according to multiple aspects or facets. The proposed scheme is a faceted scheme enriched with a method for specifying the combinations of terms that are valid. We give a model-theoretic interpretation to this model and provide mechanisms for inferring the valid combinations of terms. This inference service can be exploited for preventing errors during the indexing process, which is very important especially in the case where the indexing is done collaboratively by many users, and for deriving "complete" navigation trees suitable for browsing through the Web. The proposed scheme has several advantages over the hierarchical classification schemes currently employed by Web catalogs, namely conceptual clarity (it is easier to understand), compactness (it takes less space), and scalability (the update operations can be formulated more easily and performed more efficiently). - A flexible and efficient model for building mediators over ontology-based information sources. The proposed mediators support several modes of query translation and evaluation which can accommodate various application needs and levels of answer quality. The proposed model can be used for providing users with customized views of Web catalogs. It can also complement the techniques for building mediators over relational sources so as to support approximate translation of partially ordered domain values.
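    A toy version of the valid-combinations idea (a sketch only; the thesis gives it a model-theoretic treatment): a description with one term per facet is accepted if some explicitly declared valid combination subsumes it, i.e. each of its terms equals the declared term or a broader one. The facets, taxonomies and the single declared rule below are invented:

```python
# Toy check of facet-term combinations against declared valid compounds.
BROADER = {"crete": "greece", "greece": None,              # Location facet
           "windsurfing": "seasports", "seasports": None}  # Sport facet

def covers(term, declared):
    """True if `term` equals `declared` or is broader than it."""
    while declared is not None:
        if term == declared:
            return True
        declared = BROADER.get(declared)
    return False

VALID = [("crete", "windsurfing")]     # declared valid (Location, Sport) pairs

def is_valid(location, sport):
    return any(covers(location, l) and covers(sport, s) for l, s in VALID)

print(is_valid("greece", "seasports"))   # True: broader than a valid pair
print(is_valid("crete", "skiing"))       # False: no declared rule supports it
```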
    Imprint
    Heraklion : University of Crete / Department of Computer Science
  12. Makewita, S.M.: Investigating the generic information-seeking function of organisational decision-makers : perspectives on improving organisational information systems (2002) 0.02
    
    Abstract
    The past decade has seen the emergence of a new paradigm in the corporate world in which organisations emphasise connectivity as a means of exposing decision-makers to wider resources of information within and outside the organisation. Many organisations followed initiatives to enhance infrastructures, manage cultural shifts and emphasise managerial commitment in order to create pools and networks of knowledge. However, the concept of connectivity is not merely about presenting people with data but, more importantly, about creating environments where people can seek information efficiently. This paradigm has therefore caused a shift in the function of information systems in organisations. They now have to be assessed in relation to how they underpin people's information-seeking activities within the context of their organisational environment. This research project used interpretative research methods to investigate the nature of people's information-seeking activities at two culturally contrasting organisations. The outcomes of this research project provide insights into phenomena associated with people's information-seeking function and show how they depend on the organisational context, which is defined partly by information systems. It suggests that information-seeking is not just searching for data. The inefficiencies inherent in both people and their environments can bring opaqueness into people's data, which they need to avoid or eliminate as part of seeking information. This seems to have made information-seeking a two-tier process consisting of a primary process of searching and interpreting data and an auxiliary process of avoiding and eliminating opaqueness in data. Based on this view, this research suggests that organisational information systems operate naturally as implicit dual mechanisms to underpin the above two-tier process, and that improvements to information systems should concern maintaining the balance in these dual mechanisms.
    Date
    22. 7.2022 12:16:58
  13. Munzner, T.: Interactive visualization of large graphs and networks (2000) 0.02
    0.015741175 = product of:
      0.041976467 = sum of:
        0.01711173 = weight(_text_:use in 4746) [ClassicSimilarity], result of:
          0.01711173 = score(doc=4746,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.13532647 = fieldWeight in 4746, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.03125 = fieldNorm(doc=4746)
        0.020450631 = weight(_text_:of in 4746) [ClassicSimilarity], result of:
          0.020450631 = score(doc=4746,freq=42.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.31669703 = fieldWeight in 4746, product of:
              6.4807405 = tf(freq=42.0), with freq of:
                42.0 = termFreq=42.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=4746)
        0.004414106 = product of:
          0.008828212 = sum of:
            0.008828212 = weight(_text_:on in 4746) [ClassicSimilarity], result of:
              0.008828212 = score(doc=4746,freq=2.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.097201325 = fieldWeight in 4746, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4746)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    Many real-world domains can be represented as large node-link graphs: backbone Internet routers connect with 70,000 other hosts, mid-sized Web servers handle between 20,000 and 200,000 hyperlinked documents, and dictionaries contain millions of words defined in terms of each other. Computational manipulation of such large graphs is common, but previous tools for graph visualization have been limited to datasets of a few thousand nodes. Visual depictions of graphs and networks are external representations that exploit human visual processing to reduce the cognitive load of many tasks that require understanding of global or local structure. We assert that the two key advantages of computer-based systems for information visualization over traditional paper-based visual exposition are interactivity and scalability. We also argue that designing visualization software by taking the characteristics of a target user's task domain into account leads to systems that are more effective and scale to larger datasets than previous work. This thesis contains a detailed analysis of three specialized systems for the interactive exploration of large graphs, relating the intended tasks to the spatial layout and visual encoding choices. We present two novel algorithms for specialized layout and drawing that use quite different visual metaphors. The H3 system for visualizing the hyperlink structures of web sites scales to datasets of over 100,000 nodes by using a carefully chosen spanning tree as the layout backbone, 3D hyperbolic geometry for a Focus+Context view, and provides a fluid interactive experience through guaranteed frame rate drawing. The Constellation system features a highly specialized 2D layout intended to spatially encode domain-specific information for computational linguists checking the plausibility of a large semantic network created from dictionaries. The Planet Multicast system for displaying the tunnel topology of the Internet's multicast backbone provides a literal 3D geographic layout of arcs on a globe to help MBone maintainers find misconfigured long-distance tunnels. Each of these three systems provides a very different view of the graph structure, and we evaluate their efficacy for the intended task. We generalize these findings in our analysis of the importance of interactivity and specialization for graph visualization systems that are effective and scalable.
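    As a toy illustration of the layout-backbone idea described above (the hyperbolic geometry, Focus+Context view, and guaranteed frame rate are well beyond a sketch), the following fragment extracts a breadth-first spanning tree from a link graph; the function name and the miniature site graph are invented:

```python
from collections import deque

def bfs_spanning_tree(graph, root):
    """Parent links of a BFS spanning tree of an undirected graph {node: [neighbors]}."""
    parent = {root: None}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in parent:  # first visit fixes the tree edge
                parent[neighbor] = node
                queue.append(neighbor)
    return parent

# Hypothetical mini web site; non-tree links would be drawn only on demand.
links = {"/": ["/a", "/b"], "/a": ["/", "/b", "/c"], "/b": ["/", "/a"], "/c": ["/a"]}
print(bfs_spanning_tree(links, "/"))  # {'/': None, '/a': '/', '/b': '/', '/c': '/a'}
```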
  14. Strong, R.W.: Undergraduates' information differentiation behaviors in a research process : a grounded theory approach (2005) 0.02
    0.015556354 = product of:
      0.04148361 = sum of:
        0.01711173 = weight(_text_:use in 5985) [ClassicSimilarity], result of:
          0.01711173 = score(doc=5985,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.13532647 = fieldWeight in 5985, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.03125 = fieldNorm(doc=5985)
        0.019957775 = weight(_text_:of in 5985) [ClassicSimilarity], result of:
          0.019957775 = score(doc=5985,freq=40.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.3090647 = fieldWeight in 5985, product of:
              6.3245554 = tf(freq=40.0), with freq of:
                40.0 = termFreq=40.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=5985)
        0.004414106 = product of:
          0.008828212 = sum of:
            0.008828212 = weight(_text_:on in 5985) [ClassicSimilarity], result of:
              0.008828212 = score(doc=5985,freq=2.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.097201325 = fieldWeight in 5985, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5985)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    This research explores, using a Grounded Theory approach, the question of how a particular group of undergraduate university students differentiates the values of retrieved information in a contemporary research process. Specifically, it attempts to isolate and label the specific techniques, processes, and formulae, both objective and subjective, that the students use to identify, prioritize, and successfully incorporate the most useful and valuable information into their research projects. The research reviews the relevant literature covering the areas of epistemology, knowledge acquisition, and cognitive learning theory; early relevance research; the movement from relevance models to information seeking in context; and the proximate recent research. A research methodology is articulated using a Grounded Theory approach, and the research process and research participants are fully explained and described. The findings are set forth using three Thematic Sets (Traditional Relevance Measures; Structural Frames; and Metaphors: General and Ecological), drawing on the actual discourse of the study participants, and a theoretical construct is advanced. Based on that construct, it can be theorized that identifying and analysing the metaphorical language the students in this study used, both general and ecological metaphors (their stories) about how they found, handled, and evaluated information, can be a very useful tool for understanding how the students identified, prioritized, and successfully incorporated the most useful and relevant information into their research projects. It is also argued that this type of metaphorical analysis could provide a bridging mechanism for a broader understanding of the relationships between traditional user relevance studies and the concepts of frame theory and sense-making. Finally, a corollary to Whitmire's original epistemological hypothesis is posited: students who were more adept at using metaphors, either general or ecological, appeared more comfortable handling contradictory information sources and better able to articulate their valuing decisions. The research concludes with a discussion of the implications both for future research in the Library and Information Science field and for the practice of library professionals and classroom instructors who assist students with information-valuing decisions in a research process.
    Content
    Dissertation Presented to the Faculty of the Graduate School of The University of Texas at Austin in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy. See: http://dspace.lib.utexas.edu/bitstream/2152/701/1/strongr80063.pdf.
    Imprint
    Austin, TX : University of Texas at Austin
  15. Schneider, A.: Moderne Retrievalverfahren in klassischen bibliotheksbezogenen Anwendungen : Projekte und Perspektiven (2008) 0.01
    0.012160879 = product of:
      0.048643515 = sum of:
        0.04418082 = weight(_text_:retrieval in 4031) [ClassicSimilarity], result of:
          0.04418082 = score(doc=4031,freq=14.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.3536936 = fieldWeight in 4031, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=4031)
        0.004462694 = weight(_text_:of in 4031) [ClassicSimilarity], result of:
          0.004462694 = score(doc=4031,freq=2.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.06910896 = fieldWeight in 4031, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=4031)
      0.25 = coord(2/8)
    
    Abstract
    This thesis deals with modern retrieval methods in classical library-related applications. As the combination of the two seemingly opposed phrases in the title indicates, it links aspects of computer and information science with aspects of the library tradition. After a brief account of the starting point, the so-called information flood, in the first chapter, the second chapter gives an introduction to the theory of information retrieval. It covers the foundations of information retrieval and of information retrieval systems as well as the various approaches to subject access, treating descriptive and subject cataloguing, indexing, and automatic indexing. Within the theory of information retrieval, different retrieval models and evaluation by retrieval tests are also presented. Theory is followed in the third chapter by the practice of information retrieval, distinguishing application within an organisation, application in the information and documentation sector, and application in libraries. The in-house application is illustrated by the KURS database for education and training. The library application concerns primarily the OPAC as a compromise between library indexing and end-user requirements, and its enrichment (so-called catalogue enrichment) to improve retrieval. The library sector is treated in more detail, with a review of completed projects on information and indexing systems from the 1990s (OSIRIS, MILOS I and II, KASCADE) and a look at current projects. The two following chapters each present a current project for improving retrieval through catalogue enrichment, automatic indexing, and advanced retrieval methods: the search portal dandelon.com and the 180T project of the Hochschulbibliothekszentrum des Landes Nordrhein-Westfalen. For each, the project goal, partners, organisation, course, and technology used are presented. The projects differ in that in one case a large union-catalogue centre coordinates the project, while in the other each participating library is itself responsible for carrying it out. The sixth and final chapter offers conclusions and perspectives: the two projects described are assessed, and an outlook on developments concerning the library catalogue is given. This publication goes back to a Master's thesis in the postgraduate distance-learning programme Master of Arts (Library and Information Science) at Humboldt-Universität zu Berlin.
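    Since the chapter on retrieval tests turns on the standard effectiveness measures, here is a minimal sketch of the two of them; the document IDs are invented:

```python
def precision_recall(retrieved, relevant):
    """Precision and recall of a retrieval result against a relevance judgement."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

print(precision_recall(["d1", "d2", "d3", "d4"], ["d2", "d4", "d7"]))  # (0.5, 0.666...)
```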
  16. Lehrke, C.: Architektur von Suchmaschinen : Googles Architektur, insb. Crawler und Indizierer (2005) 0.01
    0.01087667 = product of:
      0.04350668 = sum of:
        0.029519552 = weight(_text_:retrieval in 867) [ClassicSimilarity], result of:
          0.029519552 = score(doc=867,freq=4.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.23632148 = fieldWeight in 867, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=867)
        0.013987125 = product of:
          0.02797425 = sum of:
            0.02797425 = weight(_text_:22 in 867) [ClassicSimilarity], result of:
              0.02797425 = score(doc=867,freq=2.0), product of:
                0.1446067 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041294612 = queryNorm
                0.19345059 = fieldWeight in 867, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=867)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    The Internet, with its constant influx of new users and its extreme growth, brings many new challenges. Because of this growth, most people rely on the help of search engines to find content on the Internet. To answer user queries, search engines use information retrieval techniques. The problem is that traditional information retrieval (IR) systems were developed for relatively small and coherent document collections, whereas the Internet grows continuously, changes rapidly, and is spread across geographically distributed computers. For these reasons the old techniques must be extended or entirely new IR techniques developed. One search engine that meets these challenges comparatively successfully is Google. The aim of this paper is to show how search engines work, with the focus on the Google search engine. Chapter 2 first deals with the architecture of search engines in general, to establish a basic understanding of the individual components, and then, building on this, gives an overview of Google's architecture. Chapters 3 and 4 examine more closely the two components crawler and indexer, which are central elements of search engines.
    Pages
    22 p.
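    To make the division of labour between the two components concrete, here is a minimal, purely illustrative sketch of a crawler feeding an indexer; the in-memory "web" stands in for real HTTP fetching, and all pages are invented:

```python
from collections import defaultdict, deque

web = {  # hypothetical pages: url -> (text, outgoing links)
    "/": ("google architektur crawler", ["/a"]),
    "/a": ("crawler indexer pagerank", ["/"]),
}

def crawl(start):
    """Walk the link graph breadth-first and collect page texts."""
    seen, frontier, pages = set(), deque([start]), {}
    while frontier:
        url = frontier.popleft()
        if url in seen:
            continue
        seen.add(url)
        text, links = web[url]  # a real crawler would fetch over HTTP
        pages[url] = text
        frontier.extend(links)
    return pages

def build_index(pages):
    """Build an inverted index: term -> set of urls containing it."""
    inverted = defaultdict(set)
    for url, text in pages.items():
        for term in text.split():
            inverted[term].add(url)
    return inverted

print(build_index(crawl("/"))["crawler"])  # {'/', '/a'}
```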
  17. López Vargas, M.A.: "Ilmenauer Verteiltes Information REtrieval System" (IVIRES) : eine neue Architektur zur Informationsfilterung in einem verteilten Information Retrieval System (2002) 0.01
    0.008855866 = product of:
      0.07084693 = sum of:
        0.07084693 = weight(_text_:retrieval in 4041) [ClassicSimilarity], result of:
          0.07084693 = score(doc=4041,freq=4.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.5671716 = fieldWeight in 4041, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=4041)
      0.125 = coord(1/8)
    
  18. Mair, M.: Increasing the value of meta data by using associative semantic networks (2002) 0.01
    0.008783599 = product of:
      0.035134397 = sum of:
        0.025667597 = weight(_text_:use in 4972) [ClassicSimilarity], result of:
          0.025667597 = score(doc=4972,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.20298971 = fieldWeight in 4972, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.046875 = fieldNorm(doc=4972)
        0.009466803 = weight(_text_:of in 4972) [ClassicSimilarity], result of:
          0.009466803 = score(doc=4972,freq=4.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.14660224 = fieldWeight in 4972, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=4972)
      0.25 = coord(2/8)
    
    Abstract
    Currently widespread methods for structuring information are increasingly unable to fulfil their task satisfactorily; the reason is the explosive growth of human knowledge. This diploma thesis proposes the use of associative semantic networks as one possible way out. Machine knowledge management can become considerably more intuitive and easier to use if it exploits the way the human brain processes information (in particular, associative connections). The theoretical part of this thesis discusses various aspects of a possible design of a semantic network with associative links. Beyond the basic elements and the problems of visualisation, it mainly elaborates improvements that allow powerful work with such a network. In the practical part, a network prototype implementing the most important of the features worked out is developed; the application is based on the Hyperwave Information Server. This more detailed design part gives deeper insight into software requirements, use cases, and in part also class details. The thesis closes with a short introduction to the use of the implemented prototype.
    Imprint
    Graz : University of Technology
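    A minimal sketch of what an associative semantic network over metadata might look like; the node names, weights, and API below are invented for illustration, and the Hyperwave-based prototype itself is not modelled:

```python
from collections import defaultdict

class AssociativeNet:
    """Concept nodes connected by weighted, symmetric associations."""
    def __init__(self):
        self.edges = defaultdict(dict)  # node -> {neighbor: strength}

    def associate(self, a, b, strength):
        self.edges[a][b] = strength
        self.edges[b][a] = strength  # associations are symmetric here

    def neighbors(self, node, min_strength=0.0):
        """Associated concepts, strongest first, above a strength threshold."""
        return sorted(((s, n) for n, s in self.edges[node].items()
                       if s >= min_strength), reverse=True)

net = AssociativeNet()
net.associate("metadata", "Dublin Core", 0.9)
net.associate("metadata", "semantic network", 0.6)
print(net.neighbors("metadata", min_strength=0.5))
```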
  19. Müller, T.: Wissensrepräsentation mit semantischen Netzen im Bereich Luftfahrt (2006) 0.01
    0.00871515 = product of:
      0.0348606 = sum of:
        0.020873476 = weight(_text_:retrieval in 1670) [ClassicSimilarity], result of:
          0.020873476 = score(doc=1670,freq=2.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.16710453 = fieldWeight in 1670, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1670)
        0.013987125 = product of:
          0.02797425 = sum of:
            0.02797425 = weight(_text_:22 in 1670) [ClassicSimilarity], result of:
              0.02797425 = score(doc=1670,freq=2.0), product of:
                0.1446067 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041294612 = queryNorm
                0.19345059 = fieldWeight in 1670, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1670)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    A semantic network has been modelled for the domain of aviation, containing company information, organisations, airlines, airports, etc. These have been assigned to 10 main categories, which are subdivided by facets. The concepts of the domain have been linked by 23 different relations (e.g. 'hat Standort in' (has a site in), 'bietet an' (offers), 'ist Homebase von' (is the home base of), etc.). The focus is on the difference between the three classical standard relations and the additionally introduced relations with respect to their usefulness for efficient retrieval. The categories and relations created are suitable for both cognitive and machine processing.
    Date
    26. 9.2006 21:00:22
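    As an illustration of retrieval over typed relations, the following sketch stores triples using two relation names quoted in the abstract; the entities and facts are invented:

```python
# Hypothetical facts with the German relation labels kept as identifiers.
triples = [
    ("Frankfurt", "ist Homebase von", "Lufthansa"),
    ("Lufthansa", "hat Standort in", "Köln"),
]

def query(triples, relation=None, subject=None):
    """Filter triples by relation type and/or subject."""
    return [(s, r, o) for s, r, o in triples
            if (relation is None or r == relation)
            and (subject is None or s == subject)]

print(query(triples, relation="hat Standort in"))  # [('Lufthansa', ..., 'Köln')]
```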
  20. Scherer, B.: Automatische Indexierung und ihre Anwendung im DFG-Projekt "Gemeinsames Portal für Bibliotheken, Archive und Museen (BAM)" (2003) 0.01
    0.0066129607 = product of:
      0.026451843 = sum of:
        0.020873476 = weight(_text_:retrieval in 4283) [ClassicSimilarity], result of:
          0.020873476 = score(doc=4283,freq=2.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.16710453 = fieldWeight in 4283, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4283)
        0.0055783675 = weight(_text_:of in 4283) [ClassicSimilarity], result of:
          0.0055783675 = score(doc=4283,freq=2.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.086386204 = fieldWeight in 4283, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4283)
      0.25 = coord(2/8)
    
    Abstract
    Automatic indexing has attracted growing interest for some years now because of the rising flood of information. There are, however, still reservations about it compared with intellectual indexing, regarding quality and the greater effort of implementing and maintaining such systems. Newer developments from the field of knowledge management, such as methods from artificial intelligence, information extraction, text mining, and automatic classification, are meant to upgrade and improve automatic indexing, so that a more intelligent, more content-based subject analysis can be achieved. Besides presenting the foundations and methods of automatic indexing and newer developments, this Master's thesis also describes options for evaluation. Its focus is the possible application of automatic indexing in the DFG project "Gemeinsames Portal für Bibliotheken, Archive und Museen (BAM)", in which the library-style indexing of texts is central. In an extensive test, three German linguistic systems combined with statistical methods (which in part are already integrated in the systems) are evaluated, though only on the basis of the index terms they produce. In conclusion it can be stated that the results, and thus the quality (with respect to the index terms), of intellectual and automatic indexing still differ significantly. The reasons lie in semantic problems that remain to be solved and in the matching against words from a thesaurus, which an automatic indexing system cannot always reproduce. An enrichment of the content with the index terms, to the benefit of retrieval, can be achieved depending on the system or through the integration of a thesaurus.
    Footnote
    Master's thesis in the Information Engineering programme for the degree of Master of Science in Information Science.
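    One simple way to quantify the agreement between intellectual and automatic index terms, in the spirit of the comparison described above, is set overlap; the term sets below are invented:

```python
def consistency(auto_terms, manual_terms):
    """Jaccard overlap of two index-term sets (0 = disjoint, 1 = identical)."""
    a, m = set(auto_terms), set(manual_terms)
    return len(a & m) / len(a | m) if a | m else 1.0

print(consistency({"Portal", "Bibliothek", "Indexierung"},
                  {"Portal", "Indexierung", "Archiv", "Museum"}))  # 0.4
```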

Languages

  • d 52
  • e 18

Types