Search (3079 results, page 3 of 154)

  • Filter: type_ss:"a"
  1. Seki, K.; Uehara, K.: Opinionated document retrieval using subjective triggers (2011) 0.03
    0.034903605 = product of:
      0.17451802 = sum of:
        0.17451802 = weight(_text_:grams in 4446) [ClassicSimilarity], result of:
          0.17451802 = score(doc=4446,freq=2.0), product of:
            0.39198354 = queryWeight, product of:
              8.059301 = idf(docFreq=37, maxDocs=44218)
              0.04863741 = queryNorm
            0.44521773 = fieldWeight in 4446, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.059301 = idf(docFreq=37, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4446)
      0.2 = coord(1/5)
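
The breakdown above is Lucene-style ClassicSimilarity (TF-IDF) explain output, and the same pattern repeats for every entry on this page. A minimal sketch, assuming the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))), that reproduces this entry's 0.0349 score from the quantities shown:

```python
import math

# Values copied from the explain tree for term "grams" in doc 4446.
freq, doc_freq, max_docs = 2.0, 37, 44218
query_norm, field_norm = 0.04863741, 0.0390625
coord = 1 / 5                                      # 1 of 5 query clauses matched

tf = math.sqrt(freq)                               # 1.4142135
idf = 1.0 + math.log(max_docs / (doc_freq + 1))    # ~8.0593, matching 8.059301
query_weight = idf * query_norm                    # ~0.39198354
field_weight = tf * idf * field_norm               # ~0.44521773
score = coord * query_weight * field_weight        # ~0.034903605
print(f"{score:.9f}")
```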
    
    Abstract
    This article proposes a novel application of a statistical language model to opinionated document retrieval targeting weblogs (blogs). In particular, we explore the use of the trigger model, originally developed for incorporating distant word dependencies, in order to model the characteristics of personal opinions that cannot be properly modeled by standard n-grams. Our primary assumption is that there are two constituents to form a subjective opinion. One is the subject of the opinion or the object that the opinion is about, and the other is a subjective expression; the former is regarded as a triggering word and the latter as a triggered word. We automatically identify those subjective trigger patterns to build a language model from a corpus of product customer reviews. Experimental results on the Text Retrieval Conference Blog track test collections show that, when used for reranking initial search results, our proposed model significantly improves opinionated document retrieval. In addition, we report on an experiment on dynamic adaptation of the model to a given query, which is found effective for most of the difficult queries categorized under politics and organizations. We also demonstrate that, without any modification to the proposed model itself, it can be effectively applied to polarized opinion retrieval.
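
A minimal sketch of the trigger-pair idea described in the abstract, assuming a pointwise mutual information (PMI) pairing of candidate opinion targets with a small hand-picked lexicon of subjective expressions; the paper's actual estimation from customer review corpora differs, and the lexicon, window size, and review texts here are illustrative only:

```python
import math
from collections import Counter
from itertools import combinations

SUBJECTIVE = {"great", "awful", "love", "disappointing", "excellent"}  # toy seed lexicon

reviews = [
    "the battery life is great but the screen is disappointing",
    "i love the battery even though the camera is awful",
]

pair_counts, word_counts, windows = Counter(), Counter(), 0
for review in reviews:
    tokens = review.split()
    # Slide a 5-token window; pair each non-subjective token (candidate target)
    # with each subjective token co-occurring in the window.
    for i in range(len(tokens)):
        window = set(tokens[i:i + 5])
        windows += 1
        word_counts.update(window)
        for a, b in combinations(window, 2):
            tgt, subj = (a, b) if b in SUBJECTIVE else (b, a)
            if subj in SUBJECTIVE and tgt not in SUBJECTIVE:
                pair_counts[(tgt, subj)] += 1

def pmi(tgt, subj):
    p_xy = pair_counts[(tgt, subj)] / windows
    p_x, p_y = word_counts[tgt] / windows, word_counts[subj] / windows
    return math.log(p_xy / (p_x * p_y))

# Highest-PMI (triggering word, triggered subjective expression) pairs.
print(sorted(pair_counts, key=lambda p: pmi(*p), reverse=True)[:3])
```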
  2. Ortiz-Cordova, A.; Yang, Y.; Jansen, B.J.: External to internal search : associating searching on search engines with searching on sites (2015) 0.03
    0.034903605 = product of:
      0.17451802 = sum of:
        0.17451802 = weight(_text_:grams in 2675) [ClassicSimilarity], result of:
          0.17451802 = score(doc=2675,freq=2.0), product of:
            0.39198354 = queryWeight, product of:
              8.059301 = idf(docFreq=37, maxDocs=44218)
              0.04863741 = queryNorm
            0.44521773 = fieldWeight in 2675, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.059301 = idf(docFreq=37, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2675)
      0.2 = coord(1/5)
    
    Abstract
    We analyze the transitions from external search, searching on web search engines, to internal search, searching on websites. We categorize 295,571 search episodes composed of a query submitted to web search engines and the subsequent queries submitted to a single website search by the same users. There are a total of 1,136,390 queries from all searches, of which 295,571 are external search queries and 840,819 are internal search queries. We algorithmically classify queries into states and then use n-grams to categorize search patterns. We cluster the searching episodes into major patterns and identify the most commonly occurring, which are: (1) Explorers (43% of all patterns) with a broad external search query and then broad internal search queries, (2) Navigators (15%) with an external search query containing a URL component and then specific internal search queries, and (3) Shifters (15%) with different, seemingly unrelated query types when transitioning from external to internal search. The implications of this research are that external search and internal search sessions are part of a single search episode and that online businesses can leverage these search episodes to more effectively target potential customers.
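
A minimal sketch of the pattern-categorization step described in the abstract, assuming each query in an episode has already been mapped to a one-letter state (E = external, B = broad internal, S = specific internal; the study's own state alphabet and classifier differ), with n-grams over the state strings surfacing common transition patterns:

```python
from collections import Counter

# One string per search episode: the sequence of query states (toy data).
episodes = ["EBBB", "ESS", "EBSS", "EBB", "ESSS"]

def ngrams(s, n):
    return [s[i:i + n] for i in range(len(s) - n + 1)]

bigram_counts = Counter()
for ep in episodes:
    bigram_counts.update(ngrams(ep, 2))

# Most frequent state transitions, e.g. an external query followed by a broad
# internal query ("EB") versus a specific internal one ("ES").
print(bigram_counts.most_common(3))
```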
  3. Gencosman, B.C.; Ozmutlu, H.C.; Ozmutlu, S.: Character n-gram application for automatic new topic identification (2014) 0.03
    0.034903605 = product of:
      0.17451802 = sum of:
        0.17451802 = weight(_text_:grams in 2688) [ClassicSimilarity], result of:
          0.17451802 = score(doc=2688,freq=2.0), product of:
            0.39198354 = queryWeight, product of:
              8.059301 = idf(docFreq=37, maxDocs=44218)
              0.04863741 = queryNorm
            0.44521773 = fieldWeight in 2688, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.059301 = idf(docFreq=37, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2688)
      0.2 = coord(1/5)
    
    Object
    n-grams
  4. Roy, R.S.; Agarwal, S.; Ganguly, N.; Choudhury, M.: Syntactic complexity of Web search queries through the lenses of language models, networks and users (2016) 0.03
    0.034903605 = product of:
      0.17451802 = sum of:
        0.17451802 = weight(_text_:grams in 3188) [ClassicSimilarity], result of:
          0.17451802 = score(doc=3188,freq=2.0), product of:
            0.39198354 = queryWeight, product of:
              8.059301 = idf(docFreq=37, maxDocs=44218)
              0.04863741 = queryNorm
            0.44521773 = fieldWeight in 3188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.059301 = idf(docFreq=37, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3188)
      0.2 = coord(1/5)
    
    Abstract
    Across the world, millions of users interact with search engines every day to satisfy their information needs. As the Web grows bigger over time, such information needs, manifested through user search queries, also become more complex. However, there has been no systematic study that quantifies the structural complexity of Web search queries. In this research, we make an attempt towards understanding and characterizing the syntactic complexity of search queries using a multi-pronged approach. We use traditional statistical language modeling techniques to quantify and compare the perplexity of queries with natural language (NL). We then use complex network analysis for a comparative analysis of the topological properties of queries issued by real Web users and those generated by statistical models. Finally, we conduct experiments to study whether search engine users are able to identify real queries, when presented along with model-generated ones. The three complementary studies show that the syntactic structure of Web queries is more complex than what n-grams can capture, but simpler than NL. Queries, thus, seem to represent an intermediate stage between syntactic and non-syntactic communication.
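
A minimal sketch of the language-model comparison described in the abstract, assuming an add-one-smoothed bigram model trained on a toy corpus (not the study's data), with perplexity computed for a natural-language sentence and a keyword-style query:

```python
import math
from collections import Counter

corpus = ["how do i renew a passport online",
          "where can i renew my passport",
          "passport renewal requirements for minors"]

unigrams, bigrams = Counter(), Counter()
for sent in corpus:
    toks = ["<s>"] + sent.split()
    unigrams.update(toks)
    bigrams.update(zip(toks, toks[1:]))
vocab = len(unigrams)

def perplexity(text):
    toks = ["<s>"] + text.split()
    log_p = 0.0
    for prev, cur in zip(toks, toks[1:]):
        # Add-one smoothing so unseen bigrams still get non-zero probability.
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab)
        log_p += math.log(p)
    return math.exp(-log_p / (len(toks) - 1))

print(perplexity("where can i renew my passport online"))  # NL-like input
print(perplexity("passport renewal online"))               # query-like input
```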
  5. Lhadj, L.S.; Boughanem, M.; Amrouche, K.: Enhancing information retrieval through concept-based language modeling and semantic smoothing (2016) 0.03
    0.034903605 = product of:
      0.17451802 = sum of:
        0.17451802 = weight(_text_:grams in 3221) [ClassicSimilarity], result of:
          0.17451802 = score(doc=3221,freq=2.0), product of:
            0.39198354 = queryWeight, product of:
              8.059301 = idf(docFreq=37, maxDocs=44218)
              0.04863741 = queryNorm
            0.44521773 = fieldWeight in 3221, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.059301 = idf(docFreq=37, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3221)
      0.2 = coord(1/5)
    
    Abstract
    Traditionally, many information retrieval models assume that terms occur in documents independently. Although these models have already shown good performance, the word independency assumption seems to be unrealistic from a natural language point of view, which considers that terms are related to each other. Therefore, such an assumption leads to two well-known problems in information retrieval (IR), namely, polysemy, or term mismatch, and synonymy. In language models, these issues have been addressed by considering dependencies such as bigrams, phrasal-concepts, or word relationships, but such models are estimated using simple n-grams or concept counting. In this paper, we address polysemy and synonymy mismatch with a concept-based language modeling approach that combines ontological concepts from external resources with frequently found collocations from the document collection. In addition, the concept-based model is enriched with subconcepts and semantic relationships through a semantic smoothing technique so as to perform semantic matching. Experiments carried out on TREC collections show that our model achieves significant improvements over a single word-based model and the Markov Random Field model (using a Markov classifier).
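
A minimal sketch of the smoothing idea described in the abstract, assuming Jelinek-Mercer-style interpolation of a document's concept counts with collection statistics, plus a second interpolation over related concepts standing in for the ontological relationships; the weights, concepts, and relations below are illustrative only:

```python
# Toy concept counts; "related" plays the role of ontology links used for
# semantic smoothing (hypothetical data, not from the paper).
doc_concepts = {"information_retrieval": 3, "language_model": 2}
collection_concepts = {"information_retrieval": 120, "language_model": 80,
                       "query_expansion": 40, "smoothing": 30}
related = {"smoothing": {"language_model": 0.6},
           "query_expansion": {"information_retrieval": 0.5}}

LAMBDA, MU = 0.7, 0.3   # mixture weights (assumed values)
doc_len = sum(doc_concepts.values())
coll_len = sum(collection_concepts.values())

def p_concept(c):
    p_ml = doc_concepts.get(c, 0) / doc_len
    # Semantic smoothing: probability mass carried over from related concepts
    # that actually occur in the document.
    p_rel = sum(w * doc_concepts.get(r, 0) / doc_len
                for r, w in related.get(c, {}).items())
    p_doc = (1 - MU) * p_ml + MU * p_rel
    p_coll = collection_concepts.get(c, 0) / coll_len
    return LAMBDA * p_doc + (1 - LAMBDA) * p_coll

for c in ["language_model", "smoothing", "query_expansion"]:
    print(c, round(p_concept(c), 4))
```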
  6. Ferro, N.; Silvello, G.: Toward an anatomy of IR system component performances (2018) 0.03
    0.034903605 = product of:
      0.17451802 = sum of:
        0.17451802 = weight(_text_:grams in 4035) [ClassicSimilarity], result of:
          0.17451802 = score(doc=4035,freq=2.0), product of:
            0.39198354 = queryWeight, product of:
              8.059301 = idf(docFreq=37, maxDocs=44218)
              0.04863741 = queryNorm
            0.44521773 = fieldWeight in 4035, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.059301 = idf(docFreq=37, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4035)
      0.2 = coord(1/5)
    
    Abstract
    Information retrieval (IR) systems are the prominent means for searching and accessing huge amounts of unstructured information on the web and elsewhere. They are complex systems, constituted by many different components interacting together, and evaluation is crucial to both tune and improve them. Nevertheless, in the current evaluation methodology, there is still no way to determine how much each component contributes to the overall performances and how the components interact together. This hampers the possibility of a deep understanding of IR system behavior and, in turn, prevents us from designing ahead which components are best suited to work together for a specific search task. In this paper, we move the evaluation methodology one step forward by overcoming these barriers and beginning to devise an "anatomy" of IR systems and their internals. In particular, we propose a methodology based on the General Linear Mixed Model (GLMM) and analysis of variance (ANOVA) to develop statistical models able to isolate system variance and component effects as well as their interaction, by relying on a grid of points (GoP) containing all the combinations of the analyzed components. We apply the proposed methodology to the analysis of two relevant search tasks-news search and web search-by using standard TREC collections. We analyze the basic set of components typically part of an IR system, namely, stop lists, stemmers, and n-grams, and IR models. In this way, we derive insights about English text retrieval.
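
A minimal sketch of the grid-of-points idea described in the abstract, assuming a full factorial grid over a few components with synthetic effectiveness scores; main effects are estimated here as deviations of each component level's mean from the grand mean, whereas the paper fits a GLMM and runs ANOVA:

```python
import random
from itertools import product
from statistics import mean

random.seed(0)
components = {"stoplist": ["none", "standard"],
              "stemmer": ["none", "porter"],
              "model": ["bm25", "lm"]}

# Grid of points (GoP): one synthetic MAP score per component combination.
gop = {combo: random.uniform(0.20, 0.35)
       for combo in product(*components.values())}

grand_mean = mean(gop.values())
for position, (name, levels) in enumerate(components.items()):
    for level in levels:
        level_mean = mean(s for combo, s in gop.items() if combo[position] == level)
        print(name, level, round(level_mean - grand_mean, 4))  # main effect of this level
```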
  7. Juola, P.; Mikros, G.K.; Vinsick, S.: ¬A comparative assessment of the difficulty of authorship attribution in Greek and in English (2019) 0.03
    0.034903605 = product of:
      0.17451802 = sum of:
        0.17451802 = weight(_text_:grams in 4676) [ClassicSimilarity], result of:
          0.17451802 = score(doc=4676,freq=2.0), product of:
            0.39198354 = queryWeight, product of:
              8.059301 = idf(docFreq=37, maxDocs=44218)
              0.04863741 = queryNorm
            0.44521773 = fieldWeight in 4676, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.059301 = idf(docFreq=37, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4676)
      0.2 = coord(1/5)
    
    Abstract
    Authorship attribution is an important problem in text classification, with many applications and a substantial body of research activity. Among the research findings is that many different methods will work, including a number of methods that are superficially language-independent (such as an analysis of the most common "words" or "character n-grams" in a document). Since all languages have words (and all written languages have characters), this method could (in theory) work on any language. However, it is not clear that the methods that work best on English, for example, would also work best on other languages. It is not even clear that the same level of performance is achievable in different languages, even under identical conditions. Unfortunately, it is very difficult to achieve "identical conditions" in practice. A new corpus, developed by George Mikros, provides very tight controls not only for author but also for topic, thus enabling a direct comparison of performance levels between the two languages Greek and English. We compare a number of different methods head-to-head on this corpus, and show that, overall, performance on English is higher than performance on Greek, often highly significantly so.
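
A minimal sketch of the character n-gram approach mentioned in the abstract, assuming attribution by cosine similarity between length-normalized character 3-gram profiles (one of many methods the study compares; the texts are toy examples):

```python
import math
from collections import Counter

def profile(text, n=3):
    # Frequency profile of character n-grams, normalized to unit length.
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    norm = math.sqrt(sum(v * v for v in grams.values()))
    return {g: v / norm for g, v in grams.items()}

def cosine(p, q):
    # Both profiles are unit-normalized, so the dot product is the cosine.
    return sum(w * q.get(g, 0.0) for g, w in p.items())

known = {"author_a": profile("the sea was calm and the night was warm"),
         "author_b": profile("results indicate a statistically significant effect")}
disputed = profile("the night air was warm over the calm sea")

print(max(known, key=lambda a: cosine(known[a], disputed)))  # -> author_a
```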
  8. Agarwal, B.; Ramampiaro, H.; Langseth, H.; Ruocco, M.: ¬A deep network model for paraphrase detection in short text messages (2018) 0.03
    0.034903605 = product of:
      0.17451802 = sum of:
        0.17451802 = weight(_text_:grams in 5043) [ClassicSimilarity], result of:
          0.17451802 = score(doc=5043,freq=2.0), product of:
            0.39198354 = queryWeight, product of:
              8.059301 = idf(docFreq=37, maxDocs=44218)
              0.04863741 = queryNorm
            0.44521773 = fieldWeight in 5043, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.059301 = idf(docFreq=37, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5043)
      0.2 = coord(1/5)
    
    Abstract
    This paper is concerned with paraphrase detection, i.e., identifying sentences that are semantically identical. The ability to detect similar sentences written in natural language is crucial for several applications, such as text mining, text summarization, plagiarism detection, authorship authentication and question answering. Recognizing this importance, we study in particular how to address the challenges with detecting paraphrases in user generated short texts, such as Twitter, which often contain language irregularity and noise, and do not necessarily contain as much semantic information as longer clean texts. We propose a novel deep neural network-based approach that relies on coarse-grained sentence modelling using a convolutional neural network (CNN) and a recurrent neural network (RNN) model, combined with a specific fine-grained word-level similarity matching model. More specifically, we develop a new architecture, called DeepParaphrase, which makes it possible to create an informative semantic representation of each sentence by (1) using a CNN to extract the local region information in the form of important n-grams from the sentence, and (2) applying an RNN to capture the long-term dependency information. In addition, we perform a comparative study on state-of-the-art approaches within paraphrase detection. An important insight from this study is that existing paraphrase approaches perform well when applied on clean texts, but they do not necessarily deliver good performance against noisy texts, and vice versa. In contrast, our evaluation has shown that the proposed DeepParaphrase-based approach achieves good results in both types of texts, thus making it more robust and generic than the existing approaches.
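
A minimal sketch of the two-branch encoder described in the abstract, assuming PyTorch: a CNN over word embeddings for local n-gram features, a GRU for long-range order, and cosine similarity between the resulting sentence vectors. The word-level similarity matching component of DeepParaphrase is omitted, and all dimensions are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParaphraseEncoderSketch(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, n_filters=32, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, n_filters, kernel_size=3, padding=1)  # n-gram features
        self.rnn = nn.GRU(embed_dim, hidden, batch_first=True)                 # long-range order

    def encode(self, ids):                        # ids: (batch, seq_len)
        e = self.embed(ids)                       # (batch, seq_len, embed_dim)
        local = F.relu(self.conv(e.transpose(1, 2))).max(dim=2).values
        _, h = self.rnn(e)                        # h: (1, batch, hidden)
        return torch.cat([local, h.squeeze(0)], dim=1)

    def forward(self, ids_a, ids_b):              # similarity of the two sentences
        return F.cosine_similarity(self.encode(ids_a), self.encode(ids_b))

model = ParaphraseEncoderSketch()
a = torch.randint(0, 1000, (2, 12))               # two toy sentence pairs
b = torch.randint(0, 1000, (2, 12))
print(model(a, b))                                # scores in [-1, 1]
```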
  9. Zerbst, H.-J.; Kaptein, O.: Gegenwärtiger Stand und Entwicklungstendenzen der Sacherschließung : Auswertung einer Umfrage an deutschen wissenschaftlichen und Öffentlichen Bibliotheken (1993) 0.03
    0.030899638 = product of:
      0.15449819 = sum of:
        0.15449819 = weight(_text_:3a in 7394) [ClassicSimilarity], result of:
          0.15449819 = score(doc=7394,freq=2.0), product of:
            0.41234848 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04863741 = queryNorm
            0.3746787 = fieldWeight in 7394, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=7394)
      0.2 = coord(1/5)
    
    Abstract
    Results of a survey conducted in spring 1993. A. Academic libraries: the questionnaire was sent to the members of Section IV of the DBV. Questions: (1a) Which holdings are subject indexed? (1b) How large are these holdings? (1c) Are these holdings indexed in full or only selectively (individual subjects, textbooks, dissertations, etc.)? (1d) Since when have the current subject catalogs existed? (2) How are the holdings currently subject indexed? (3a) Which classification is used? (3b) Is there an alphabetical index to the classified catalog (SyK), or access to the class descriptions? (3c) Are there supplementary keys for the aspects place, time, and form? (4) If you maintain a subject heading catalog (SWK): (a) according to which set of rules? (b) Is there a standardized vocabulary or a thesaurus (possibly only for certain subjects)? (5) In what form do the subject catalogs exist? (6) Does the library take part in cooperative subject indexing, e.g. in a network? [No: 79%] (7) Do you use externally supplied data for subject indexing? [Yes: 46%] (8) What subject search options are available to users? (9) Are changes to subject indexing planned for the future? [Yes: 73%]. - B. Public libraries: the survey was addressed to all public libraries of Sections I, II, and III of the DBV. Questions: (1) Which subject catalogs do you maintain? (2) Which classification schemes underlie the classified catalog (SyK)? [ASB: 242; KAB: 333; SfB: 4 (???); SSD: 11; Berliner: 18] (3) Do you maintain your own subject heading index to the classified catalog or classification? (4) Do you maintain the subject heading catalog (SWK) according to ...? [RSWK: 132 (= ca. 60%); other rules: 93] (5) Since when have the current subject catalogs existed? (6) In what form do the subject catalogs exist? (7) To what extent are the holdings indexed? (8) Which call numbers do you use? (9) Does the library take part in cooperative subject indexing, e.g. a network? [No: 96%] (10) Do you use externally supplied data for subject indexing? [Yes: 70%] (11) From where do you obtain these external data? (12) Do you have an online catalog system with an OPAC? [Yes: 78; No: 614] (13) Are changes to subject indexing planned for the future? [No: 458; Yes: 237]; SUMMARY for public libraries: "(i) The introduction of computer-based catalogs will remain an issue through the 1990s, (ii) many libraries are beginning to build subject heading catalogs, with the reuse of external data playing a decisive role, (iii) the RSWK are increasingly being applied, and use of the SWD has a normalizing effect even where other rules are followed, (iv) no major movement on the 'classification market' is to be expected in the foreseeable future, (v) for smaller libraries the card catalog will remain the prevailing catalog form for the foreseeable future, (vi) the considerable backlog in the new federal states can only be worked off over a longer period. ??? SPECIAL LIBRARIES ???
  10. Jacsó, P.: Searching for images by similarity online (1998) 0.03
    0.029821565 = product of:
      0.14910783 = sum of:
        0.14910783 = weight(_text_:22 in 393) [ClassicSimilarity], result of:
          0.14910783 = score(doc=393,freq=4.0), product of:
            0.17031991 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04863741 = queryNorm
            0.8754574 = fieldWeight in 393, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=393)
      0.2 = coord(1/5)
    
    Date
    29.11.2004 13:03:22
    Source
    Online. 22(1998) no.6, S.99-102
  11. Rübesame, O.: Probleme des geographischen Schlüssels (1963) 0.03
    0.029821565 = product of:
      0.14910783 = sum of:
        0.14910783 = weight(_text_:22 in 134) [ClassicSimilarity], result of:
          0.14910783 = score(doc=134,freq=4.0), product of:
            0.17031991 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04863741 = queryNorm
            0.8754574 = fieldWeight in 134, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=134)
      0.2 = coord(1/5)
    
    Date
    17. 1.1999 13:22:22
  12. Lutz, H.: Back to business : was CompuServe Unternehmen bietet (1997) 0.03
    0.029821565 = product of:
      0.14910783 = sum of:
        0.14910783 = weight(_text_:22 in 6569) [ClassicSimilarity], result of:
          0.14910783 = score(doc=6569,freq=4.0), product of:
            0.17031991 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04863741 = queryNorm
            0.8754574 = fieldWeight in 6569, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=6569)
      0.2 = coord(1/5)
    
    Date
    22. 2.1997 19:50:29
    Source
    Cogito. 1997, H.1, S.22-23
  13. Klauß, H.: SISIS : 10. Anwenderforum Berlin-Brandenburg (1999) 0.03
    0.029821565 = product of:
      0.14910783 = sum of:
        0.14910783 = weight(_text_:22 in 463) [ClassicSimilarity], result of:
          0.14910783 = score(doc=463,freq=4.0), product of:
            0.17031991 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04863741 = queryNorm
            0.8754574 = fieldWeight in 463, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=463)
      0.2 = coord(1/5)
    
    Date
    22. 2.1999 10:22:52
  14. fwt: Wie das Gehirn Bilder 'liest' (1999) 0.03
    0.029821565 = product of:
      0.14910783 = sum of:
        0.14910783 = weight(_text_:22 in 4042) [ClassicSimilarity], result of:
          0.14910783 = score(doc=4042,freq=4.0), product of:
            0.17031991 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04863741 = queryNorm
            0.8754574 = fieldWeight in 4042, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=4042)
      0.2 = coord(1/5)
    
    Date
    22. 7.2000 19:01:22
  15. Winterhoff-Spurk, P.: Auf dem Weg in die mediale Klassengesellschaft : Psychologische Beiträge zur Wissenskluft-Forschung (1999) 0.03
    0.029821565 = product of:
      0.14910783 = sum of:
        0.14910783 = weight(_text_:22 in 4130) [ClassicSimilarity], result of:
          0.14910783 = score(doc=4130,freq=4.0), product of:
            0.17031991 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04863741 = queryNorm
            0.8754574 = fieldWeight in 4130, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=4130)
      0.2 = coord(1/5)
    
    Date
    8.11.1999 19:22:39
    Source
    Medien praktisch. 1999, H.3, S.17-22
  16. RAK-NBM : Interpretationshilfe zu NBM 3b,3 (2000) 0.03
    0.029821565 = product of:
      0.14910783 = sum of:
        0.14910783 = weight(_text_:22 in 4362) [ClassicSimilarity], result of:
          0.14910783 = score(doc=4362,freq=4.0), product of:
            0.17031991 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04863741 = queryNorm
            0.8754574 = fieldWeight in 4362, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=4362)
      0.2 = coord(1/5)
    
    Date
    22. 1.2000 19:22:27
  17. Diederichs, A.: Wissensmanagement ist Macht : Effektiv und kostenbewußt arbeiten im Informationszeitalter (2005) 0.03
    0.029821565 = product of:
      0.14910783 = sum of:
        0.14910783 = weight(_text_:22 in 3211) [ClassicSimilarity], result of:
          0.14910783 = score(doc=3211,freq=4.0), product of:
            0.17031991 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04863741 = queryNorm
            0.8754574 = fieldWeight in 3211, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=3211)
      0.2 = coord(1/5)
    
    Date
    22. 2.2005 9:16:22
  18. Hawking, D.; Robertson, S.: On collection size and retrieval effectiveness (2003) 0.03
    0.029821565 = product of:
      0.14910783 = sum of:
        0.14910783 = weight(_text_:22 in 4109) [ClassicSimilarity], result of:
          0.14910783 = score(doc=4109,freq=4.0), product of:
            0.17031991 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04863741 = queryNorm
            0.8754574 = fieldWeight in 4109, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=4109)
      0.2 = coord(1/5)
    
    Date
    14. 8.2005 14:22:22
  19. Buzydlowski, J.W.; White, H.D.; Lin, X.: Term Co-occurrence Analysis as an Interface for Digital Libraries (2002) 0.03
    0.027392859 = product of:
      0.13696429 = sum of:
        0.13696429 = weight(_text_:22 in 1339) [ClassicSimilarity], result of:
          0.13696429 = score(doc=1339,freq=6.0), product of:
            0.17031991 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04863741 = queryNorm
            0.804159 = fieldWeight in 1339, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.09375 = fieldNorm(doc=1339)
      0.2 = coord(1/5)
    
    Date
    22. 2.2003 17:25:39
    22. 2.2003 18:16:22
  20. Fischer, W.L.: Toleranzräume (1970) 0.03
    0.026358789 = product of:
      0.13179395 = sum of:
        0.13179395 = weight(_text_:22 in 192) [ClassicSimilarity], result of:
          0.13179395 = score(doc=192,freq=2.0), product of:
            0.17031991 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04863741 = queryNorm
            0.77380234 = fieldWeight in 192, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.15625 = fieldNorm(doc=192)
      0.2 = coord(1/5)
    
    Source
    Archimedes. 22(1970) H.4, S.101-107

Types

  • el 73
  • b 34
  • p 1
