Search (22 results, page 1 of 2)

  • type_ss:"x"
  • year_i:[2000 TO 2010}
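  The two facets above are Lucene/Solr-style filter queries: type_ss:"x" restricts the record type, and year_i:[2000 TO 2010} selects years from 2000 inclusive up to, but excluding, 2010 (the closing curly brace marks an exclusive bound). Assuming the page is served by a Solr index (the host, core name and client code below are illustrative guesses, not taken from this page), an equivalent filtered request could look like the following sketch; debugQuery=true is the parameter that produces the per-hit scoring explanations shown with each result.

      from urllib.parse import urlencode
      from urllib.request import urlopen
      import json

      # Hypothetical Solr endpoint; the real host and core behind this page are unknown.
      SOLR_SELECT = "http://localhost:8983/solr/catalogue/select"

      params = urlencode({
          "q": "*:*",                                      # placeholder main query
          "fq": ['type_ss:"x"', "year_i:[2000 TO 2010}"],  # the two active filters above
          "rows": 20,
          "wt": "json",
          "debugQuery": "true",                            # ask Solr for the explain trees
      }, doseq=True)

      with urlopen(f"{SOLR_SELECT}?{params}") as resp:
          result = json.load(resp)
      print(result["response"]["numFound"])                # e.g. 22 for this result set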
  1. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.10
    0.098102 = product of:
      0.196204 = sum of:
        0.049051 = product of:
          0.147153 = sum of:
            0.147153 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.147153 = score(doc=701,freq=2.0), product of:
                0.3927445 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046325076 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
        0.147153 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.147153 = score(doc=701,freq=2.0), product of:
            0.3927445 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046325076 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.5 = coord(2/4)
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
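    For readers unfamiliar with Lucene explain output, the tree above can be reproduced with a few lines of arithmetic. The sketch below assumes Lucene's ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm) and plugs in the statistics printed above; it illustrates how the displayed 0.10 comes about and is not code taken from the search application.

      import math

      # Statistics read off the explain tree for doc 701 (terms _text_:3a and _text_:2f).
      MAX_DOCS   = 44218
      DOC_FREQ   = 24           # both terms occur in 24 documents
      QUERY_NORM = 0.046325076
      FIELD_NORM = 0.03125      # fieldNorm(doc=701)
      FREQ       = 2.0          # termFreq of each term in the field

      idf = 1.0 + math.log(MAX_DOCS / (DOC_FREQ + 1))   # ~8.478011
      tf  = math.sqrt(FREQ)                             # ~1.4142135

      query_weight = idf * QUERY_NORM                   # ~0.3927445
      field_weight = tf * idf * FIELD_NORM              # ~0.3746787
      term_score   = query_weight * field_weight        # ~0.147153

      # The first clause is additionally wrapped in coord(1/3); the second is not.
      clause_sum  = term_score * (1.0 / 3.0) + term_score   # ~0.196204
      final_score = clause_sum * (2.0 / 4.0)                # coord(2/4) -> ~0.098102

      print(round(final_score, 6))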
  2. Simon, D.: Anreicherung bibliothekarischer Titeldaten durch Tagging : Möglichkeiten und Probleme (2007) 0.02
    0.020141546 = product of:
      0.08056618 = sum of:
        0.08056618 = weight(_text_:social in 530) [ClassicSimilarity], result of:
          0.08056618 = score(doc=530,freq=4.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.43614143 = fieldWeight in 530, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0546875 = fieldNorm(doc=530)
      0.25 = coord(1/4)
    
    Abstract
    The thesis examines the possibilities of tagging in the context of library subject indexing. The author introduces the topic of social tagging and folksonomies and explains how tagging systems work. The study rests essentially on an analysis of the Kölner UniversitätsGesamtkatalog (KUG), which lets catalogue users tag records directly and also transfer catalogue entries to the BibSonomy system. KUG and BibSonomy are therefore presented with their respective features before the tagging options and their actual use so far are critically assessed. The author also examines the possible contribution of tagging as a complement to the results of intellectual subject indexing and automatic indexing.
    Theme
    Social tagging
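    The remaining hits in this list follow the same scoring pattern; only the term statistics change. Assuming the same ClassicSimilarity formulas as above, the idf constants that recur throughout the explain trees can be reproduced from the document frequencies they report (a sketch for illustration, not code from the application):

      import math

      MAX_DOCS = 44218

      def idf(doc_freq: int) -> float:
          # ClassicSimilarity idf as printed in the explain output (assumed formula).
          return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

      print(round(idf(2228), 6))   # ~3.9875789  (_text_:social)
      print(round(idf(3622), 6))   # ~3.5018296  (_text_:22)
      print(round(idf(1308), 6))   # ~4.5198684  (_text_:aspects)
      print(math.sqrt(2.0), math.sqrt(4.0))   # tf values for freq=2.0 and freq=4.0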
  3. Kirk, J.: Theorising information use : managers and their work (2002) 0.01
    0.0142422225 = product of:
      0.05696889 = sum of:
        0.05696889 = weight(_text_:social in 560) [ClassicSimilarity], result of:
          0.05696889 = score(doc=560,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.30839854 = fieldWeight in 560, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0546875 = fieldNorm(doc=560)
      0.25 = coord(1/4)
    
    Imprint
    Sydney : University of Technology / Faculty of Humanities and Social Sciences
  4. Sperling, R.: Anlage von Literaturreferenzen für Onlineressourcen auf einer virtuellen Lernplattform (2004) 0.01
    0.010983714 = product of:
      0.043934856 = sum of:
        0.043934856 = product of:
          0.08786971 = sum of:
            0.08786971 = weight(_text_:22 in 4635) [ClassicSimilarity], result of:
              0.08786971 = score(doc=4635,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.5416616 = fieldWeight in 4635, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4635)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    26.11.2005 18:39:22
  5. Carlin, S.A.: Schlagwortvergabe durch Nutzende (Tagging) als Hilfsmittel zur Suche im Web : Ansatz, Modelle, Realisierungen (2006) 0.01
    0.010173016 = product of:
      0.040692065 = sum of:
        0.040692065 = weight(_text_:social in 2476) [ClassicSimilarity], result of:
          0.040692065 = score(doc=2476,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.22028469 = fieldWeight in 2476, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2476)
      0.25 = coord(1/4)
    
    Theme
    Social tagging
  6. Milanesi, C.: Möglichkeiten der Kooperation im Rahmen von Subject Gateways : das Euler-Projekt im Vergleich mit weiteren europäischen Projekten (2001) 0.01
    0.009414612 = product of:
      0.03765845 = sum of:
        0.03765845 = product of:
          0.0753169 = sum of:
            0.0753169 = weight(_text_:22 in 4865) [ClassicSimilarity], result of:
              0.0753169 = score(doc=4865,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.46428138 = fieldWeight in 4865, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4865)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 6.2002 19:41:59
  7. Haller, S.H.M.: Mappingverfahren zur Wissensorganisation (2002) 0.01
    0.007845511 = product of:
      0.031382043 = sum of:
        0.031382043 = product of:
          0.062764086 = sum of:
            0.062764086 = weight(_text_:22 in 3406) [ClassicSimilarity], result of:
              0.062764086 = score(doc=3406,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.38690117 = fieldWeight in 3406, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3406)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    30. 5.2010 16:22:35
  8. Francu, V.: Multilingual access to information using an intermediate language (2003) 0.01
    0.0073936307 = product of:
      0.029574523 = sum of:
        0.029574523 = product of:
          0.059149045 = sum of:
            0.059149045 = weight(_text_:aspects in 1742) [ClassicSimilarity], result of:
              0.059149045 = score(doc=1742,freq=4.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.28249177 = fieldWeight in 1742, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1742)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Although theoretically so widely available, information can be barred from more general use by linguistic barriers. The linguistic aspects of information languages, and particularly the chances of enhancing access to information by means of multilingual access facilities, form the substance of this thesis. The main problem of this research is thus to demonstrate that information retrieval can be improved by searching with multilingual thesaurus terms based on an intermediate or switching language. Universal classification systems in general can play the role of switching languages, for reasons dealt with in the forthcoming pages. The Universal Decimal Classification (UDC) in particular is the classification system used here as an example of a switching language. The question may arise: why a universal classification system and not another thesaurus? Because the UDC, like most classification systems, uses symbols. It is therefore language-independent, and the problems of compatibility between such a thesaurus and other thesauri in different languages are avoided. Another question may still arise: why not, then, assign running numbers to the descriptors in a thesaurus and make a switching language out of the resulting enumerative system? Because of some other characteristics of the UDC: hierarchical structure and terminological richness, consistency and control. One big problem to answer is: can a thesaurus be built on the basis of a classification system in any and all its parts, and to what extent can this question be given an affirmative answer? This depends largely on the attributes of the universal classification system that can favourably be used for this purpose. Examples of different situations will be given and discussed, beginning with those classes of the UDC which are best suited for building a thesaurus structure out of them (classes which are both hierarchical and faceted)...
    Content
    Contents: INFORMATION LANGUAGES: A LINGUISTIC APPROACH - MULTILINGUAL ASPECTS IN INFORMATION STORAGE AND RETRIEVAL - COMPATIBILITY AND CONVERTIBILITY OF INFORMATION LANGUAGES - CURRENT TRENDS IN MULTILINGUAL ACCESS - BUILDING UDC-BASED MULTILINGUAL THESAURI - ONLINE APPLICATIONS OF THE UDC-BASED MULTILINGUAL THESAURI - THE IMPACT OF SPECIFICITY ON THE RETRIEVAL POWER OF A UDC-BASED MULTILINGUAL THESAURUS - FINAL REMARKS AND GENERAL CONCLUSIONS. - Thesis submitted for the degree of Doctor in Language and Literature at the Universiteit Antwerpen. - Cf.: http://dlist.sir.arizona.edu/1862/.
  9. Hoffmann, R.: Mailinglisten für den bibliothekarischen Informationsdienst am Beispiel von RABE (2000) 0.01
    0.0066571366 = product of:
      0.026628546 = sum of:
        0.026628546 = product of:
          0.053257093 = sum of:
            0.053257093 = weight(_text_:22 in 4441) [ClassicSimilarity], result of:
              0.053257093 = score(doc=4441,freq=4.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.32829654 = fieldWeight in 4441, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4441)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 2.2000 10:25:05
    Series
    Kölner Arbeitspapiere zur Bibliotheks- und Informationswissenschaft; Bd.22
  10. Eckert, K.: Thesaurus analysis and visualization in semantic search applications (2007) 0.01
    0.0065351077 = product of:
      0.026140431 = sum of:
        0.026140431 = product of:
          0.052280862 = sum of:
            0.052280862 = weight(_text_:aspects in 3222) [ClassicSimilarity], result of:
              0.052280862 = score(doc=3222,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.2496898 = fieldWeight in 3222, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3222)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The use of thesaurus-based indexing is a common approach for increasing the performance of information retrieval. In this thesis, we examine the suitability of a thesaurus for a given set of information and evaluate improvements of existing thesauri to get better search results. In this area, we focus on two aspects: 1. We demonstrate an analysis of the indexing results achieved by an automatic document indexer and the involved thesaurus. 2. We propose a method for thesaurus evaluation which is based on a combination of statistical measures and appropriate visualization techniques that support the detection of potential problems in a thesaurus. In this chapter, we give an overview of the context of our work. Next, we briefly outline the basics of thesaurus-based information retrieval and describe the Collexis Engine that was used for our experiments. In Chapter 3, we describe two experiments in automatically indexing documents in the areas of medicine and economics with corresponding thesauri and compare the results to available manual annotations. Chapter 4 describes methods for assessing thesauri and visualizing the result in terms of a treemap. We depict examples of interesting observations supported by the method and show that we actually find critical problems. We conclude with a discussion of open questions and future research in Chapter 5.
  11. Thielemann, A.: Sacherschließung für die Kunstgeschichte : Möglichkeiten und Grenzen von DDC 700: The Arts (2007) 0.01
    0.006276408 = product of:
      0.025105633 = sum of:
        0.025105633 = product of:
          0.050211266 = sum of:
            0.050211266 = weight(_text_:22 in 1409) [ClassicSimilarity], result of:
              0.050211266 = score(doc=1409,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.30952093 = fieldWeight in 1409, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1409)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Following the publication of a German translation of the Dewey Decimal Classification 22 in October 2005 and its use for subject indexing in the Deutsche Nationalbibliographie since January 2006, the question arises, from the perspective of German art-history special libraries, whether the DDC could be adopted and how well it is generally suited to the subject indexing of art-historical publications. This question is discussed against the background of the existing library structures for art history as well as with regard to the subject-specific characteristics, the research methodology and the publishing traditions of this discipline.
  12. Schwarz, K.: Domain model enhanced search : a comparison of taxonomy, thesaurus and ontology (2005) 0.01
    0.005228086 = product of:
      0.020912344 = sum of:
        0.020912344 = product of:
          0.041824687 = sum of:
            0.041824687 = weight(_text_:aspects in 4569) [ClassicSimilarity], result of:
              0.041824687 = score(doc=4569,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.19975184 = fieldWeight in 4569, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4569)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The results of this thesis are intended to support the information architect in designing a solution for improved search in a corporate environment. Specifically, we have examined the type of search problems that require a domain model to enhance the search process. There are several approaches to modeling a domain. We have considered 3 different types of domain modeling schemes: taxonomy, thesaurus and ontology. The intention is to support the information architect in making an informed choice between one or more of these schemes. In our opinion the main criteria for this choice are the modeling characteristics of a scheme and the suitability for application in the search process. The second chapter is a discussion of modeling characteristics of each scheme, followed by a comparison between them. This should give an information architect an idea of which aspects of a domain can be modeled with each scheme. What is missing here is an indication of the effort required to model a domain with each scheme. There are too many factors that influence the amount of required effort, ranging from measurable factors like domain size and resource characteristics to cultural matters such as the willingness to share knowledge and the existence of a project champion in the team to keep the project running. The third chapter shows what role domain models can play in each part of the search process. This gives an idea of the problems that domain models can solve. We have split the search process into individual parts to show that domain models can be applied very differently in the process. The fourth chapter makes recommendations about the suitability of each individual domain modeling scheme for improving search. Each scheme has particular characteristics that make it especially suitable for a domain or a search problem. In the appendix each case study is described in detail. These descriptions are intended to serve as a benchmark. The current problem of the enterprise can be compared to those described to see which case study is most similar, which solution was chosen, which problems arose and how they were dealt with. An important issue that we have not touched upon in this thesis is that of maintenance. The real problems of a domain model are revealed when it is applied in a search system and its deficits and wrong assumptions become clear. Adaptation and maintenance are always required. Unfortunately we have not been able to glean sufficient information about maintenance issues from our case studies to draw any meaningful conclusions.
  13. Tzitzikas, Y.: Collaborative ontology-based information indexing and retrieval (2002) 0.01
    0.005228086 = product of:
      0.020912344 = sum of:
        0.020912344 = product of:
          0.041824687 = sum of:
            0.041824687 = weight(_text_:aspects in 2281) [ClassicSimilarity], result of:
              0.041824687 = score(doc=2281,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.19975184 = fieldWeight in 2281, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2281)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    An information system like the Web is a continuously evolving system consisting of multiple heterogeneous information sources, covering a wide domain of discourse, and a huge number of users (human or software) with diverse characteristics and needs, that produce and consume information. The challenge nowadays is to build a scalable information infrastructure enabling the effective, accurate, content-based retrieval of information, in a way that adapts to the characteristics and interests of the users. The aim of this work is to propose formally sound methods for building such an information network based on ontologies which are widely used and are easy to grasp by ordinary Web users. The main results of this work are: - A novel scheme for indexing and retrieving objects according to multiple aspects or facets. The proposed scheme is a faceted scheme enriched with a method for specifying the combinations of terms that are valid. We give a model-theoretic interpretation to this model and we provide mechanisms for inferring the valid combinations of terms. This inference service can be exploited for preventing errors during the indexing process, which is very important especially in the case where the indexing is done collaboratively by many users, and for deriving "complete" navigation trees suitable for browsing through the Web. The proposed scheme has several advantages over the hierarchical classification schemes currently employed by Web catalogs, namely, conceptual clarity (it is easier to understand), compactness (it takes less space), and scalability (the update operations can be formulated more easily and be performed more efficiently). - A flexible and efficient model for building mediators over ontology-based information sources. The proposed mediators support several modes of query translation and evaluation which can accommodate various application needs and levels of answer quality. The proposed model can be used for providing users with customized views of Web catalogs. It can also complement the techniques for building mediators over relational sources so as to support approximate translation of partially ordered domain values.
  14. Lorenz, S.: Konzeption und prototypische Realisierung einer begriffsbasierten Texterschließung (2006) 0.00
    0.004707306 = product of:
      0.018829225 = sum of:
        0.018829225 = product of:
          0.03765845 = sum of:
            0.03765845 = weight(_text_:22 in 1746) [ClassicSimilarity], result of:
              0.03765845 = score(doc=1746,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.23214069 = fieldWeight in 1746, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1746)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 3.2015 9:17:30
  15. Markó, K.G.: Foundation, implementation and evaluation of the MorphoSaurus system (2008) 0.00
    0.0045745755 = product of:
      0.018298302 = sum of:
        0.018298302 = product of:
          0.036596604 = sum of:
            0.036596604 = weight(_text_:aspects in 4415) [ClassicSimilarity], result of:
              0.036596604 = score(doc=4415,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.17478286 = fieldWeight in 4415, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4415)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The proper handling of acronyms plays a crucial role in medical texts, e.g. in patient records, as well as in scientific literature. Chapter six presents an approach in which acronyms are automatically acquired from (bio-) medical literature. Furthermore, acronyms and their definitions in different languages are linked to each other using the MorphoSaurus text processing system. Automatic word sense disambiguation is still one of the most challenging tasks in Natural Language Processing. In Chapter seven, cross-lingual considerations lead to a new methodology for automatic disambiguation applied to subwords. Beginning with Chapter eight, a series of applications based on MorphoSaurus is introduced. Firstly, the implementation of the subword approach within a cross-language information retrieval setting for the medical domain is described and evaluated on standard test document collections. In Chapter nine, this methodology is extended to multilingual information retrieval in the Web, for which user queries are translated into target languages based on the segmentation into subwords and their interlingual mappings. The cross-lingual, automatic assignment of document descriptors to documents is the topic of Chapter ten. A large-scale evaluation of a heuristic, as well as a statistical algorithm is carried out using a prominent medical thesaurus as a controlled vocabulary. In Chapter eleven, it will be shown how MorphoSaurus can be used to map monolingual, lexical resources across different languages. As a result, a large multilingual medical lexicon with high coverage and complete lexical information is built and evaluated against a comparable, already available and commonly used lexical repository for the medical domain. Chapter twelve sketches a few applications based on MorphoSaurus. The generality and applicability of the subword approach to other domains is outlined, and proof-of-concepts in real-world scenarios are presented. Finally, Chapter thirteen recapitulates the most important aspects of MorphoSaurus and the potential benefit of its employment in medical information systems is carefully assessed, both for medical experts in their everyday life, but also with regard to health care consumers and their existential information needs.
  16. Witschel, H.F.: Global and local resources for peer-to-peer text retrieval (2008) 0.00
    0.0045745755 = product of:
      0.018298302 = sum of:
        0.018298302 = product of:
          0.036596604 = sum of:
            0.036596604 = weight(_text_:aspects in 127) [ClassicSimilarity], result of:
              0.036596604 = score(doc=127,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.17478286 = fieldWeight in 127, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=127)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This thesis is organised as follows: Chapter 2 gives a general introduction to the field of information retrieval, covering its most important aspects. Further, the tasks of distributed and peer-to-peer information retrieval (P2PIR) are introduced, motivating their application and characterising the special challenges that they involve, including a review of existing architectures and search protocols in P2PIR. Finally, chapter 2 presents approaches to evaluating the effectiveness of both traditional and peer-to-peer IR systems. Chapter 3 contains a detailed account of state-of-the-art information retrieval models and algorithms. This encompasses models for matching queries against document representations, term weighting algorithms, approaches to feedback and associative retrieval as well as distributed retrieval. It thus defines important terminology for the following chapters. The notion of "multi-level association graphs" (MLAGs) is introduced in chapter 4. An MLAG is a simple, graph-based framework that allows one to model most of the theoretical and practical approaches to IR presented in chapter 3. Moreover, it provides an easy-to-grasp way of defining and including new entities into IR modeling, such as paragraphs or peers, dividing them conceptually while at the same time connecting them to each other in a meaningful way. This allows for a unified view on many IR tasks, including that of distributed and peer-to-peer search. Starting from related work and a formal definition of the framework, the possibilities of modeling that it provides are discussed in detail, followed by an experimental section that shows how new insights gained from modeling inside the framework can lead to novel combinations of principles and eventually to improved retrieval effectiveness.
  17. Buß, M.: Unternehmenssprache in internationalen Unternehmen : Probleme des Informationstransfers in der internen Kommunikation (2005) 0.00
    0.0039227554 = product of:
      0.015691021 = sum of:
        0.015691021 = product of:
          0.031382043 = sum of:
            0.031382043 = weight(_text_:22 in 1482) [ClassicSimilarity], result of:
              0.031382043 = score(doc=1482,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.19345059 = fieldWeight in 1482, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1482)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 5.2005 18:25:26
  18. Düring, M.: ¬Die Dewey Decimal Classification : Entstehung, Aufbau und Ausblick auf eine Nutzung in deutschen Bibliotheken (2003) 0.00
    0.0039227554 = product of:
      0.015691021 = sum of:
        0.015691021 = product of:
          0.031382043 = sum of:
            0.031382043 = weight(_text_:22 in 2460) [ClassicSimilarity], result of:
              0.031382043 = score(doc=2460,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.19345059 = fieldWeight in 2460, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2460)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The ever-growing volume of published information in ever new forms demands, especially from information and documentation institutions, increasingly precise solutions for indexing this information and presenting it in a user-friendly way. Particularly in the current age of databases and online catalogues, a combination of verbal and classificatory subject indexing is required, without losing the connection to the older card catalogues that are still (at least additionally) in use in many places. A large number of different classifications are in use worldwide. The choice of the classification appropriate for an institution depends on its thematic and informational orientation, on the size and nature of its holdings and, not least, on technical and staffing conditions. On the side of the classification to be chosen, ease of handling for the librarian, comprehensibility for the user, the ability to extend the classification as new fields of knowledge emerge, and integration into information networks with other institutions are of decisive importance. This thesis examines the Dewey Decimal Classification (DDC) with regard to these points. It is the most widely used classification in the world: about 200,000 libraries in 135 countries index their holdings with this system. It is currently available in its 22nd unabridged edition and has so far been translated into 30 languages; a complete German translation is due to appear in 2005. Despite at times heated standardisation debates and plans to adopt American descriptive cataloguing rules, there is little agreement among German libraries with regard to subject indexing. Apart from Great Britain and its use in bibliographies, the DDC is hardly used in Germany and other European countries. This thesis therefore looks at the historical reasons for this development and ventures a brief outlook on the future of the Decimal Classification.
  19. Westermeyer, D.: Adaptive Techniken zur Informationsgewinnung : der Webcrawler InfoSpiders (2005) 0.00
    0.0039227554 = product of:
      0.015691021 = sum of:
        0.015691021 = product of:
          0.031382043 = sum of:
            0.031382043 = weight(_text_:22 in 4333) [ClassicSimilarity], result of:
              0.031382043 = score(doc=4333,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.19345059 = fieldWeight in 4333, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4333)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Pages
    22 S
  20. Lehrke, C.: Architektur von Suchmaschinen : Googles Architektur, insb. Crawler und Indizierer (2005) 0.00
    0.0039227554 = product of:
      0.015691021 = sum of:
        0.015691021 = product of:
          0.031382043 = sum of:
            0.031382043 = weight(_text_:22 in 867) [ClassicSimilarity], result of:
              0.031382043 = score(doc=867,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.19345059 = fieldWeight in 867, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=867)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Pages
    22 S