Search (98 results, page 5 of 5)

  • theme_ss:"Semantisches Umfeld in Indexierung u. Retrieval"
  • year_i:[2000 TO 2010}
  1. Cool, C.; Spink, A.: Issues of context in information retrieval (IR) : an introduction to the special issue (2002) 0.00
    0.0011642005 = product of:
      0.002328401 = sum of:
        0.002328401 = product of:
          0.006985203 = sum of:
            0.006985203 = weight(_text_:a in 2587) [ClassicSimilarity], result of:
              0.006985203 = score(doc=2587,freq=6.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.13239266 = fieldWeight in 2587, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2587)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    The subject of context has received a great deal of attention in the information retrieval (IR) literature over the past decade, primarily in studies of information seeking and IR interactions. Recently, attention to context in IR has expanded to address new problems in new environments. In this paper we outline five overlapping dimensions of context which we believe to be important constituent elements and we discuss how they are related to different issues in IR research. The papers in this special issue are summarized with respect to how they represent work that is being conducted within these dimensions of context. We conclude with future areas of research which are needed in order to fully understand the multidimensional nature of context in IR.
    Type
    a
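     The numeric breakdown under each entry is Lucene's explain output for ClassicSimilarity (TF-IDF). As a minimal sketch, the Python snippet below reproduces the arithmetic of the first entry from the constants shown above; the idf line assumes ClassicSimilarity's standard formula, everything else is copied from the explain tree.

       import math

       # Minimal sketch: reproduce the ClassicSimilarity score explained above
       # (doc 2587, field _text_:a, freq = 6). Constants are copied from the
       # explain tree; only their combination is coded here.
       freq = 6.0
       doc_freq, max_docs = 37942, 44218
       query_norm = 0.045758117
       field_norm = 0.046875

       idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))   # ~1.153047 (assumed ClassicSimilarity idf)
       tf = math.sqrt(freq)                                 # 2.4494898 = tf(freq=6.0)

       query_weight = idf * query_norm                      # ~0.052761257
       field_weight = tf * idf * field_norm                 # ~0.13239266
       term_score = query_weight * field_weight             # ~0.006985203

       # coord(1/3): one of three query clauses matched; coord(1/2): one of two.
       final_score = term_score * (1.0 / 3.0) * (1.0 / 2.0)
       print(f"{final_score:.10f}")                         # ~0.0011642005
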
  2. Menczer, F.: Lexical and semantic clustering by Web links (2004) 0.00
    0.0011642005 = product of:
      0.002328401 = sum of:
        0.002328401 = product of:
          0.006985203 = sum of:
            0.006985203 = weight(_text_:a in 3090) [ClassicSimilarity], result of:
              0.006985203 = score(doc=3090,freq=6.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.13239266 = fieldWeight in 3090, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3090)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
     Recent Web-searching and -mining tools are combining text and link analysis to improve ranking and crawling algorithms. The central assumption behind such approaches is that there is a correlation between the graph structure of the Web and the text and meaning of pages. Here I formalize and empirically evaluate two general conjectures drawing connections from link information to lexical and semantic Web content. The link-content conjecture states that a page is similar to the pages that link to it, and the link-cluster conjecture that pages about the same topic are clustered together. These conjectures are often simply assumed to hold, and Web search tools are built on such assumptions. The present quantitative confirmation sheds light on the connection between the success of the latest Web-mining techniques and the small-world topology of the Web, with encouraging implications for the design of better crawling algorithms. [A minimal similarity sketch follows this entry.]
    Type
    a
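     To make the link-content conjecture of entry 2 concrete, here is a rough sketch, not Menczer's actual measure, that compares a page's bag-of-words cosine similarity to the pages linking to it against its similarity to unrelated pages; the toy pages are invented for illustration.

       import math
       from collections import Counter

       def cosine(a: Counter, b: Counter) -> float:
           """Cosine similarity of two bag-of-words vectors."""
           dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
           norm = (math.sqrt(sum(v * v for v in a.values()))
                   * math.sqrt(sum(v * v for v in b.values())))
           return dot / norm if norm else 0.0

       def bag(text: str) -> Counter:
           return Counter(text.lower().split())

       # Hypothetical toy pages: a target page, its in-linking pages, unrelated pages.
       target = bag("semantic web links cluster pages by topic")
       in_links = [bag("links between web pages reflect topic similarity"),
                   bag("semantic clustering of web pages via links")]
       unrelated = [bag("recipe for apple pie with cinnamon"),
                    bag("football league results and standings")]

       link_sim = sum(cosine(target, p) for p in in_links) / len(in_links)
       random_sim = sum(cosine(target, p) for p in unrelated) / len(unrelated)

       # The link-content conjecture predicts link_sim > random_sim.
       print(f"in-link similarity {link_sim:.3f} vs unrelated {random_sim:.3f}")
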
  3. Zazo, A.F.; Figuerola, C.G.; Berrocal, J.L.A.; Rodriguez, E.: Reformulation of queries using similarity-thesauri (2005) 0.00
    0.0011642005 = product of:
      0.002328401 = sum of:
        0.002328401 = product of:
          0.006985203 = sum of:
            0.006985203 = weight(_text_:a in 1043) [ClassicSimilarity], result of:
              0.006985203 = score(doc=1043,freq=6.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.13239266 = fieldWeight in 1043, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1043)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
     One of the major problems in information retrieval is the formulation of queries on the part of the user. This entails specifying a set of words or terms that express their informational need. However, it is well-known that two people can assign different terms to refer to the same concepts. The techniques that attempt to reduce this problem as much as possible generally start from a first search, and then study how the initial query can be modified to obtain better results. In general, the construction of the new query involves expanding the terms of the initial query and recalculating the importance of each term in the expanded query. Depending on the technique used to formulate the new query several strategies are distinguished. These strategies are based on the idea that if two terms are similar (with respect to any criterion), the documents in which both terms appear frequently will also be related. The technique we used in this study is known as query expansion using similarity thesauri. [A minimal expansion sketch follows this entry.]
    Type
    a
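     As a loose illustration of the similarity-thesaurus expansion summarized in entry 3, the sketch below derives term-term similarities from a toy term-document matrix and adds the most similar terms to the query with reduced weights; the corpus, the cosine measure and the down-weighting factor are assumptions, not the weighting scheme used in the paper.

       import numpy as np

       # Hypothetical toy corpus; rows of the term-document matrix are term vectors.
       docs = ["thesaurus based query expansion improves retrieval",
               "similarity thesaurus built from term cooccurrence",
               "query reformulation with expanded terms improves recall"]
       vocab = sorted({w for d in docs for w in d.split()})
       tdm = np.array([[d.split().count(t) for d in docs] for t in vocab], dtype=float)

       # Similarity thesaurus: cosine similarity between term vectors.
       norms = np.linalg.norm(tdm, axis=1, keepdims=True)
       unit = tdm / np.where(norms == 0, 1, norms)
       sim = unit @ unit.T

       def expand(query_terms, k=2, weight=0.5):
           """Add the k most similar thesaurus terms per query term, down-weighted."""
           weights = {t: 1.0 for t in query_terms}
           for t in query_terms:
               if t not in vocab:
                   continue
               i = vocab.index(t)
               for j in np.argsort(-sim[i])[: k + 1]:
                   cand = vocab[j]
                   if cand not in weights:
                       weights[cand] = weight * sim[i, j]
           return weights

       print(expand(["query", "expansion"]))
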
  4. Stojanovic, N.: On the query refinement in the ontology-based searching for information (2005) 0.00
    0.0011202524 = product of:
      0.0022405048 = sum of:
        0.0022405048 = product of:
          0.0067215143 = sum of:
            0.0067215143 = weight(_text_:a in 2907) [ClassicSimilarity], result of:
              0.0067215143 = score(doc=2907,freq=2.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.12739488 = fieldWeight in 2907, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2907)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Type
    a
  5. Shiri, A.: Topic familiarity and its effects on term selection and browsing in a thesaurus-enhanced search environment (2005) 0.00
    0.0011202524 = product of:
      0.0022405048 = sum of:
        0.0022405048 = product of:
          0.0067215143 = sum of:
            0.0067215143 = weight(_text_:a in 613) [ClassicSimilarity], result of:
              0.0067215143 = score(doc=613,freq=8.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.12739488 = fieldWeight in 613, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=613)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - To evaluate the extent to which familiarity with search topics affects the ways in which users select and browse search terms in a thesaurus-enhanced search setting. Design/methodology/approach - An experimental methodology was adopted to study users' search behaviour in an operational information retrieval environment. Findings - Topic familiarity and subject knowledge influence some search and interaction behaviours. Searches involving moderately and very familiar topics were associated with browsing around twice as many thesaurus terms as was the case for unfamiliar topics. Research limitations/implications - Some search behaviours such as thesaurus browsing and term selection could be used as an indication of user levels of topic familiarity. Practical implications - The results of this study provide design implications as to how to develop personalized search interfaces where users with varying levels of familiarity with search topics can carry out searches. Originality/value - This paper establishes the importance of topic familiarity characteristics and the effects of those characteristics on users' interaction with search interfaces enhanced with semantic tools such as thesauri.
    Type
    a
  6. Bai, J.; Nie, J.-Y.: Adapting information retrieval to query contexts (2008) 0.00
    0.0011202524 = product of:
      0.0022405048 = sum of:
        0.0022405048 = product of:
          0.0067215143 = sum of:
            0.0067215143 = weight(_text_:a in 2446) [ClassicSimilarity], result of:
              0.0067215143 = score(doc=2446,freq=8.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.12739488 = fieldWeight in 2446, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2446)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
     In current IR approaches documents are retrieved only according to the terms specified in the query. The same answers are returned for the same query whatever the user and the search goal are. In reality, many other contextual factors strongly influence a document's relevance and they should be taken into account in IR operations. This paper proposes a method, based on language modeling, to integrate several contextual factors so that document ranking will be adapted to the specific query contexts. We will consider three contextual factors in this paper: the topic domain of the query, the characteristics of the document collection, as well as context words within the query. Each contextual factor is used to generate a new query language model to specify some aspect of the information need. All these query models are then combined together to produce a more complete model for the underlying information need. Our experiments on TREC collections show that each contextual factor can positively influence the IR effectiveness and the combined model results in the highest effectiveness. This study shows that it is both beneficial and feasible to integrate more contextual factors in the current IR practice. [A minimal sketch of such a model combination follows this entry.]
    Type
    a
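     A minimal sketch of the model combination described in entry 6, assuming a simple linear interpolation of unigram query models and a crude cross-entropy score against additively smoothed documents; the component models and mixture weights are illustrative, not those of the paper.

       from collections import Counter
       import math

       def unigram(text):
           c = Counter(text.split())
           n = sum(c.values())
           return {w: f / n for w, f in c.items()}

       # Hypothetical component query models: the query itself, its topic domain,
       # and context words; the mixture weights are illustrative assumptions.
       models = {
           "query":   (0.6, unigram("java memory model")),
           "domain":  (0.3, unigram("programming language concurrency threads")),
           "context": (0.1, unigram("tutorial example")),
       }

       combined = Counter()
       for weight, model in models.values():
           for w, p in model.items():
               combined[w] += weight * p

       def score(doc_text, mu=0.01):
           """Cross-entropy style score under the combined query model (mu: crude smoothing)."""
           doc = unigram(doc_text)
           return sum(p_q * math.log(doc.get(w, 0.0) + mu) for w, p_q in combined.items())

       docs = ["java memory model and threads explained with examples",
               "coffee brewing guide for beginners"]
       print(sorted(docs, key=score, reverse=True)[0])
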
  7. Gao, J.; Zhang, J.: Clustered SVD strategies in latent semantic indexing (2005) 0.00
    0.0011089934 = product of:
      0.0022179869 = sum of:
        0.0022179869 = product of:
          0.0066539603 = sum of:
            0.0066539603 = weight(_text_:a in 1166) [ClassicSimilarity], result of:
              0.0066539603 = score(doc=1166,freq=4.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.12611452 = fieldWeight in 1166, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1166)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
     The text retrieval method using latent semantic indexing (LSI) technique with truncated singular value decomposition (SVD) has been intensively studied in recent years. The SVD reduces the noise contained in the original representation of the term-document matrix and improves the information retrieval accuracy. Recent studies indicate that SVD is mostly useful for small homogeneous data collections. For large inhomogeneous datasets, the performance of the SVD based text retrieval technique may deteriorate. We propose to partition a large inhomogeneous dataset into several smaller ones with clustered structure, on which we apply the truncated SVD. Our experimental results show that the clustered SVD strategies may enhance the retrieval accuracy and reduce the computing and storage costs. [A minimal clustered-SVD sketch follows this entry.]
    Type
    a
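     A minimal sketch of the clustered-SVD strategy of entry 7, assuming scikit-learn for tf-idf weighting, k-means clustering and truncated SVD; the toy collection, cluster count and rank are illustrative.

       from sklearn.feature_extraction.text import TfidfVectorizer
       from sklearn.cluster import KMeans
       from sklearn.decomposition import TruncatedSVD

       # Hypothetical, deliberately inhomogeneous toy collection.
       docs = ["latent semantic indexing reduces noise in term document matrices",
               "truncated svd improves retrieval accuracy for text",
               "semantic indexing of text with singular value decomposition",
               "football transfer news and league results",
               "league standings after the football weekend",
               "match report from the football cup final"]

       X = TfidfVectorizer().fit_transform(docs)

       # Step 1: partition the collection into (here) two clusters.
       labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

       # Step 2: apply a truncated SVD separately inside each cluster.
       cluster_models = {}
       for c in set(labels):
           idx = [i for i, l in enumerate(labels) if l == c]
           svd = TruncatedSVD(n_components=2, random_state=0).fit(X[idx])
           cluster_models[c] = (idx, svd)
           print(f"cluster {c}: docs {idx}, explained variance "
                 f"{svd.explained_variance_ratio_.sum():.2f}")
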
  8. Quiroga, L.M.; Mostafa, J.: ¬An experiment in building profiles in information filtering : the role of context of user relevance feedback (2002) 0.00
    9.701671E-4 = product of:
      0.0019403342 = sum of:
        0.0019403342 = product of:
          0.0058210026 = sum of:
            0.0058210026 = weight(_text_:a in 2579) [ClassicSimilarity], result of:
              0.0058210026 = score(doc=2579,freq=6.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.11032722 = fieldWeight in 2579, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2579)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
     An experiment was conducted to see how relevance feedback could be used to build and adjust profiles to improve the performance of filtering systems. Data was collected during the system interaction of 18 graduate students with SIFTER (Smart Information Filtering Technology for Electronic Resources), a filtering system that ranks incoming information based on users' profiles. The data set came from a collection of 6000 records concerning consumer health. In the first phase of the study, three different modes of profile acquisition were compared. The explicit mode allowed users to directly specify the profile; the implicit mode utilized relevance feedback to create and refine the profile; and the combined mode allowed users to initialize the profile and to continuously refine it using relevance feedback. Filtering performance, measured in terms of Normalized Precision, showed that the three approaches were significantly different (α = 0.05 and p = 0.012). The explicit mode of profile acquisition consistently produced superior results. Exclusive reliance on relevance feedback in the implicit mode resulted in inferior performance. The low performance obtained by the implicit acquisition mode motivated the second phase of the study, which aimed to clarify the role of context in relevance feedback judgments. An inductive content analysis of thinking aloud protocols showed dimensions that were highly situational, establishing the importance context plays in feedback relevance assessments. Results suggest the need for better representation of documents, profiles, and relevance feedback mechanisms that incorporate dimensions identified in this research. [A Rocchio-style sketch of profile refinement follows this entry.]
    Type
    a
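     Entry 8 builds and refines filtering profiles from relevance feedback. The sketch below uses a standard Rocchio-style update as a stand-in for SIFTER's actual profile mechanism, which the abstract does not specify; the vectors and coefficients are illustrative.

       import numpy as np

       def rocchio_update(profile, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
           """One Rocchio-style profile refinement step from relevance feedback."""
           rel = np.mean(relevant, axis=0) if len(relevant) else np.zeros_like(profile)
           nonrel = np.mean(nonrelevant, axis=0) if len(nonrelevant) else np.zeros_like(profile)
           updated = alpha * profile + beta * rel - gamma * nonrel
           return np.clip(updated, 0.0, None)   # keep term weights non-negative

       # Hypothetical 5-term vocabulary; profile and feedback vectors hold term weights.
       profile = np.array([0.5, 0.1, 0.0, 0.2, 0.0])        # initial (explicit) profile
       relevant = [np.array([0.6, 0.0, 0.3, 0.1, 0.0]),      # documents judged relevant
                   np.array([0.4, 0.1, 0.4, 0.0, 0.0])]
       nonrelevant = [np.array([0.0, 0.7, 0.0, 0.0, 0.3])]   # judged non-relevant

       print(rocchio_update(profile, relevant, nonrelevant))
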
  9. Lehtokangas, R.; Järvelin, K.: Consistency of textual expression in newspaper articles : an argument for semantically based query expansion (2001) 0.00
    9.701671E-4 = product of:
      0.0019403342 = sum of:
        0.0019403342 = product of:
          0.0058210026 = sum of:
            0.0058210026 = weight(_text_:a in 4485) [ClassicSimilarity], result of:
              0.0058210026 = score(doc=4485,freq=6.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.11032722 = fieldWeight in 4485, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4485)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
     This article investigates how consistent different newspapers are in their choice of words when writing about the same news events. News articles on the same news events were taken from three Finnish newspapers and compared in regard to their central concepts and words representing the concepts in the news texts. Consistency figures were calculated for each set of three articles (the total number of sets was sixty). Inconsistency in words and concepts was found between news articles from different newspapers. The mean value of consistency calculated on the basis of words was 65 per cent; this however depended on the article length. For short news wires consistency was 83 per cent while for long articles it was only 47 per cent. At the concept level, consistency was considerably higher, ranging from 92 per cent to 97 per cent between short and long articles. The articles also represented three categories of topic (event, process and opinion). Statistically significant differences in consistency were found in regard to length but not in regard to the categories of topic. We argue that this inconsistency of expression is a clear sign of a retrieval problem and that query expansion based on semantic relationships can significantly improve retrieval performance on free-text sources. [A simple consistency computation is sketched after this entry.]
    Type
    a
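     Entry 9 reports word-level and concept-level consistency percentages. The abstract does not give the formula; the sketch below assumes word-level consistency is the share of content words common to two articles relative to the smaller vocabulary, which is only one plausible reading, not the authors' measure.

       STOPWORDS = {"the", "a", "an", "of", "in", "on", "and", "to", "was", "were", "after"}

       def content_words(text: str) -> set:
           return {w.strip(".,").lower() for w in text.split()} - STOPWORDS

       def word_consistency(article_a: str, article_b: str) -> float:
           """Shared content words relative to the smaller vocabulary, in per cent."""
           a, b = content_words(article_a), content_words(article_b)
           return 100.0 * len(a & b) / min(len(a), len(b))

       # Hypothetical news wires on the same event.
       wire_1 = "Parliament approved the new budget on Tuesday after a long debate."
       wire_2 = "The parliament passed the budget on Tuesday following lengthy debate."
       print(f"word-level consistency: {word_consistency(wire_1, wire_2):.0f}%")
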
  10. Niemi, T.; Jämsen, J.: ¬A query language for discovering semantic associations, part II : sample queries and query evaluation (2007) 0.00
    9.701671E-4 = product of:
      0.0019403342 = sum of:
        0.0019403342 = product of:
          0.0058210026 = sum of:
            0.0058210026 = weight(_text_:a in 580) [ClassicSimilarity], result of:
              0.0058210026 = score(doc=580,freq=6.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.11032722 = fieldWeight in 580, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=580)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    In our query language introduced in Part I (Journal of the American Society for Information Science and Technology. 58(2007) no.11, S.1559-1568) the user can formulate queries to find out (possibly complex) semantic relationships among entities. In this article we demonstrate the usage of our query language and discuss the new applications that it supports. We categorize several query types and give sample queries. The query types are categorized based on whether the entities specified in a query are known or unknown to the user in advance, and whether text information in documents is utilized. Natural language is used to represent the results of queries in order to facilitate correct interpretation by the user. We discuss briefly the issues related to the prototype implementation of the query language and show that an independent operation like Rho (Sheth et al., 2005; Anyanwu & Sheth, 2002, 2003), which presupposes entities of interest to be known in advance, is exceedingly inefficient in emulating the behavior of our query language. The discussion also covers potential problems, and challenges for future work.
    Type
    a
  11. Bayer, O.; Höhfeld, S.; Josbächer, F.; Kimm, N.; Kradepohl, I.; Kwiatkowski, M.; Puschmann, C.; Sabbagh, M.; Werner, N.; Vollmer, U.: Evaluation of an ontology-based knowledge-management-system : a case study of Convera RetrievalWare 8.0 (2005) 0.00
    9.701671E-4 = product of:
      0.0019403342 = sum of:
        0.0019403342 = product of:
          0.0058210026 = sum of:
            0.0058210026 = weight(_text_:a in 624) [ClassicSimilarity], result of:
              0.0058210026 = score(doc=624,freq=6.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.11032722 = fieldWeight in 624, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=624)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
     With RetrievalWare 8.0(TM) the American company Convera offers an elaborate software package for Information Retrieval, Information Indexing and Knowledge Management. Convera promises the ability to handle different file formats in many different languages. Compared with similar products, one innovation stands out in particular: the possibility of preparing as well as integrating an ontology. One tool of the software package can be used to build ontologies manually, to process existing ontologies and to import them. The processing of search results is also worth mentioning. By means of categorization strategies, search results can be classified dynamically and presented in personalized representations. This study presents an evaluation of the functions and components of the system. Technological aspects and modes of operation under the surface of Convera RetrievalWare are analysed, with a focus on the creation of libraries and thesauri, and the problems posed by the integration of an existing thesaurus. Broader aspects such as usability and system ergonomics are included in the examination as well.
    Type
    a
  12. Baofu, P.: ¬The future of information architecture : conceiving a better way to understand taxonomy, network, and intelligence (2008) 0.00
    9.701671E-4 = product of:
      0.0019403342 = sum of:
        0.0019403342 = product of:
          0.0058210026 = sum of:
            0.0058210026 = weight(_text_:a in 2257) [ClassicSimilarity], result of:
              0.0058210026 = score(doc=2257,freq=6.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.11032722 = fieldWeight in 2257, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2257)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
     The Future of Information Architecture examines issues surrounding why information is processed, stored and applied in the way that it has been, since time immemorial. Contrary to the conventional wisdom held by many scholars in human history, the recurrent debate on the explanation of the most basic categories of information (e.g. space, time, causation, quality, quantity) has been misconstrued, to the effect that there exist some deeper categories and principles behind these categories of information - with enormous implications for our understanding of reality in general. To understand this, the book is organised into four main parts: Part I begins with the vital question concerning the role of information within the context of the larger theoretical debate in the literature. Part II provides a critical examination of the nature of data taxonomy from the main perspectives of culture, society, nature and the mind. Part III constructively investigates the world of information networks from the main perspectives of culture, society, nature and the mind. Part IV proposes six main theses in the author's synthetic theory of information architecture, namely: (a) the first thesis on the simpleness-complicatedness principle, (b) the second thesis on the exactness-vagueness principle, (c) the third thesis on the slowness-quickness principle, (d) the fourth thesis on the order-chaos principle, (e) the fifth thesis on the symmetry-asymmetry principle, and (f) the sixth thesis on the post-human stage.
  13. Shiri, A.A.; Revie, C.: End-user interaction with thesauri : an evaluation of cognitive overlap in search term selection (2004) 0.00
    9.5056574E-4 = product of:
      0.0019011315 = sum of:
        0.0019011315 = product of:
          0.0057033943 = sum of:
            0.0057033943 = weight(_text_:a in 2658) [ClassicSimilarity], result of:
              0.0057033943 = score(doc=2658,freq=4.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.10809815 = fieldWeight in 2658, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2658)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
     The use of thesaurus-enhanced search tools is on the increase. This paper provides an insight into end-users' interaction with and perceptions of such tools. In particular, the overlap between users' initial query formulation and thesaurus structures is investigated. This investigation involved the performance of genuine search tasks on the CAB Abstracts database by academic users in the domain of veterinary medicine. The perception of these users regarding the nature and usefulness of the terms suggested from the thesaurus during the search interaction is reported. The results indicated that around 80% of terms entered were matched either exactly or partially to thesaurus terms. Users found over 90% of the terms suggested to be close to their search topics, and where terms were selected they indicated that around 50% were to support a 'narrowing down' activity. These findings have implications for the design of thesaurus-enhanced interfaces. [A minimal matching sketch follows this entry.]
    Type
    a
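     Entry 13 reports that roughly 80% of user terms matched thesaurus terms exactly or partially. The sketch below assumes "partial" means sharing at least one word with some thesaurus term; the study's own matching rules are not described in the abstract, and the thesaurus fragment is invented.

       def classify_term(user_term: str, thesaurus: set) -> str:
           """Classify a user term as an exact, partial, or non-match against a thesaurus."""
           term = user_term.lower().strip()
           if term in thesaurus:
               return "exact"
           words = set(term.split())
           if any(words & set(t.split()) for t in thesaurus):
               return "partial"
           return "none"

       # Hypothetical thesaurus fragment and user query terms.
       thesaurus = {"bovine tuberculosis", "cattle diseases", "veterinary vaccines"}
       user_terms = ["cattle diseases", "tuberculosis in cows", "pet insurance"]

       matches = {t: classify_term(t, thesaurus) for t in user_terms}
       matched = sum(v != "none" for v in matches.values())
       print(matches, f"{100 * matched / len(user_terms):.0f}% matched")
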
  14. Bilal, D.; Kirby, J.: Differences and similarities in information seeking : children and adults as Web users (2002) 0.00
    8.9620193E-4 = product of:
      0.0017924039 = sum of:
        0.0017924039 = product of:
          0.0053772116 = sum of:
            0.0053772116 = weight(_text_:a in 2591) [ClassicSimilarity], result of:
              0.0053772116 = score(doc=2591,freq=8.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.10191591 = fieldWeight in 2591, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2591)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
     This study examined the success and information seeking behaviors of seventh-grade science students and graduate students in information science in using the Yahooligans! Web search engine/directory. It investigated these users' cognitive, affective, and physical behaviors as they sought the answer for a fact-finding task. It analyzed and compared the overall patterns of children's and graduate students' Web activities, including searching moves, browsing moves, backtracking moves, looping moves, screen scrolling, target location and deviation moves, and the time they took to complete the task. The authors applied Bilal's Web Traversal Measure to quantify these users' effectiveness, efficiency, and quality of moves they made. Results were based on 14 children's Web sessions and nine graduate students' sessions. Both groups' Web activities were captured online using Lotus ScreenCam, a software package that records and replays online activities in Web browsers. Children's affective states were captured via exit interviews. Graduate students' affective states were extracted from the journal writings they kept during the traversal process. The study findings reveal that 89% of the graduate students found the correct answer to the search task as opposed to 50% of the children. Based on the Measure, graduate students' weighted effectiveness, efficiency, and quality of the Web moves they made were much higher than those of the children. Regardless of success and weighted scores, however, similarities and differences in information seeking were found between the two groups. The poor structure of keyword searching in Yahooligans! was a major factor that contributed to the "breakdowns" children and graduate students experienced. Unlike children, graduate students were able to recover from "breakdowns" quickly and effectively. Three main factors influenced these users' performance: ability to recover from "breakdowns", navigational style, and focus on task. Children and graduate students made recommendations for improving the Yahooligans! interface design. Implications for Web user training and system design improvements are made.
    Type
    a
  15. Greenberg, J.: Optimal query expansion (QE) processing methods with semantically encoded structured thesaurus terminology (2001) 0.00
    6.721515E-4 = product of:
      0.001344303 = sum of:
        0.001344303 = product of:
          0.004032909 = sum of:
            0.004032909 = weight(_text_:a in 5750) [ClassicSimilarity], result of:
              0.004032909 = score(doc=5750,freq=2.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.07643694 = fieldWeight in 5750, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5750)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Type
    a
  16. red: Alles Wissen gleich einer großen Stadt (2002) 0.00
    6.721515E-4 = product of:
      0.001344303 = sum of:
        0.001344303 = product of:
          0.004032909 = sum of:
            0.004032909 = weight(_text_:a in 1484) [ClassicSimilarity], result of:
              0.004032909 = score(doc=1484,freq=2.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.07643694 = fieldWeight in 1484, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1484)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Type
    a
  17. Hauer, M.: Neue OPACs braucht das Land ... dandelon.com (2006) 0.00
    6.721515E-4 = product of:
      0.001344303 = sum of:
        0.001344303 = product of:
          0.004032909 = sum of:
            0.004032909 = weight(_text_:a in 6047) [ClassicSimilarity], result of:
              0.004032909 = score(doc=6047,freq=2.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.07643694 = fieldWeight in 6047, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6047)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Type
    a
  18. Schek, M.: Automatische Klassifizierung in Erschließung und Recherche eines Pressearchivs (2006) 0.00
    4.4810097E-4 = product of:
      8.9620193E-4 = sum of:
        8.9620193E-4 = product of:
          0.0026886058 = sum of:
            0.0026886058 = weight(_text_:a in 6043) [ClassicSimilarity], result of:
              0.0026886058 = score(doc=6043,freq=2.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.050957955 = fieldWeight in 6043, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=6043)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Type
    a

Languages

  • e 76
  • d 22

Types

  • a 93
  • el 8
  • m 4
  • s 1