Search (179 results, page 1 of 9)

  • Active filter: theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.27
    Score 0.2722137 (Lucene ClassicSimilarity) for doc 562 = coord(4/7) * [0.0657 from "3a" (idf 8.478, coord 1/3) + 0.1970 from "2f" + 0.1970 from "2f" + 0.0168 from "22" (idf 3.502, coord 1/2)], where each term weight = tf(sqrt(2)) * idf^2 * fieldNorm(0.046875) * queryNorm(0.041336425)
    
    Content
    Vgl.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
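    Note
    The score breakdown above is ordinary Lucene ClassicSimilarity arithmetic and can be reproduced by hand. A minimal Python sketch, with the constants copied from the breakdown (the helper name is ours, not part of Lucene):
      import math

      def term_score(freq, idf, field_norm=0.046875, query_norm=0.041336425):
          # fieldWeight = tf * idf * fieldNorm; queryWeight = idf * queryNorm
          tf = math.sqrt(freq)  # 1.4142135 for freq = 2.0
          return (tf * idf * field_norm) * (idf * query_norm)

      w_rare = term_score(2.0, 8.478011)   # terms "3a" and "2f": 0.1969596
      w_22   = term_score(2.0, 3.5018296)  # term "22": 0.0336031

      # "3a" is scaled by coord(1/3) and "22" by coord(1/2); the sum is then
      # scaled by coord(4/7), since 4 of the 7 query clauses matched doc 562.
      score = (w_rare / 3 + 2 * w_rare + w_22 / 2) * (4 / 7)
      print(round(score, 7))  # ~0.2722137, the 0.27 displayed for entry 1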
  2. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.20
    
    Source
    https://arxiv.org/abs/2212.06721
  3. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.18
    
    Content
    A thesis presented to the University of Guelph in partial fulfilment of requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  4. Haas, S.W.: A feasibility study of the case hierarchy model for the construction and porting of natural language interfaces (1990) 0.06
    
    Source
    Information processing and management. 26(1990), pp.615-628
  5. Fox, B.; Fox, C.J.: Efficient stemmer generation (2002) 0.03
    
    Abstract
    This paper presents an algorithm for generating stemmers from text stemmer specification files. A small study shows that the generated stemmers are computationally efficient, often running faster than stemmers custom written to implement particular stemming algorithms. The stemmer specification files are easily written and modified by non-programmers, making it much easier to create a stemmer, or tune a stemmer's performance, than would be the case with a custom stemmer program. Stemmer generation is thus also human-resource efficient.
    Source
    Information processing and management. 38(2002) no.4, pp.547-558
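    Note
    The abstract describes compiling human-editable specification files into stemmers. As an illustration of the general idea only (the rule format below is invented, not Fox and Fox's actual file syntax), a toy suffix-rule stemmer:
      # Rules as (suffix, replacement) pairs, the kind of table a
      # non-programmer might maintain in a specification file.
      RULES = [("ization", "ize"), ("ational", "ate"), ("ness", ""), ("ing", ""), ("s", "")]

      def stem(word: str) -> str:
          # Apply the longest matching suffix rule, guarding very short stems.
          for suffix, repl in sorted(RULES, key=lambda r: -len(r[0])):
              if word.endswith(suffix) and len(word) > len(suffix) + 2:
                  return word[: -len(suffix)] + repl
          return word

      print([stem(w) for w in ["organization", "running", "cats"]])
      # -> ['organize', 'runn', 'cat']  (under-stemming like 'runn' is what tuning fixes)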
  6. Working with conceptual structures : contributions to ICCS 2000. 8th International Conference on Conceptual Structures: Logical, Linguistic, and Computational Issues. Darmstadt, August 14-18, 2000 (2000) 0.03
    
    Abstract
    The 8th International Conference on Conceptual Structures - Logical, Linguistic, and Computational Issues (ICCS 2000) brings together a wide range of researchers and practitioners working with conceptual structures. During the last few years, the ICCS conference series has considerably widened its scope on different kinds of conceptual structures, stimulating research across domain boundaries. We hope that this stimulation is further enhanced by ICCS 2000 joining the long tradition of conferences in Darmstadt with extensive, lively discussions. This volume consists of contributions presented at ICCS 2000, complementing the volume "Conceptual Structures: Logical, Linguistic, and Computational Issues" (B. Ganter, G.W. Mineau (Eds.), LNAI 1867, Springer, Berlin-Heidelberg 2000). It contains submissions reviewed by the program committee, and position papers. We wish to express our appreciation to all the authors of submitted papers, to the general chair, the program chair, the editorial board, the program committee, and to the additional reviewers for making ICCS 2000 a valuable contribution in the knowledge processing research field. Special thanks go to the local organizers for making the conference an enjoyable and inspiring event. We are grateful to Darmstadt University of Technology, the Ernst Schröder Center for Conceptual Knowledge Processing, the Center for Interdisciplinary Studies in Technology, the Deutsche Forschungsgemeinschaft, Land Hessen, and NaviCon GmbH for their generous support
    Content
    Concepts & Language: Knowledge organization by procedures of natural language processing. A case study using the method GABEK (J. Zelger, J. Gadner) - Computer aided narrative analysis using conceptual graphs (H. Schärfe, P. Øhrstrøm) - Pragmatic representation of argumentative text: a challenge for the conceptual graph approach (H. Irandoust, B. Moulin) - Conceptual graphs as a knowledge representation core in a complex language learning environment (G. Angelova, A. Nenkova, S. Boycheva, T. Nikolov) - Conceptual Modeling and Ontologies: Relationships and actions in conceptual categories (Ch. Landauer, K.L. Bellman) - Concept approximations for formal concept analysis (J. Saquer, J.S. Deogun) - Faceted information representation (U. Priß) - Simple concept graphs with universal quantifiers (J. Tappe) - A framework for comparing methods for using or reusing multiple ontologies in an application (J. van Zyl, D. Corbett) - Designing task/method knowledge-based systems with conceptual graphs (M. Leclère, F. Trichet, Ch. Choquet) - A logical ontology (J. Farkas, J. Sarbo) - Algorithms and Tools: Fast concept analysis (Ch. Lindig) - A framework for conceptual graph unification (D. Corbett) - Visual CP representation of knowledge (H.D. Pfeiffer, R.T. Hartley) - Maximal isojoin for representing software textual specifications and detecting semantic anomalies (Th. Charnois) - Troika: using grids, lattices and graphs in knowledge acquisition (H.S. Delugach, B.E. Lampkin) - Open world theorem prover for conceptual graphs (J.E. Heaton, P. Kocura) - NetCare: a practical conceptual graphs software tool (S. Polovina, D. Strang) - CGWorld - a web based workbench for conceptual graphs management and applications (P. Dobrev, K. Toutanova) - Position papers: The edition project: Peirce's existential graphs (R. Müller) - Mining association rules using formal concept analysis (N. Pasquier) - Contextual logic summary (R. Wille) - Information channels and conceptual scaling (K.E. Wolff) - Spatial concepts - a rule exploration (S. Rudolph) - The TEXT-TO-ONTO learning environment (A. Mädche, St. Staab) - Controlling the semantics of metadata on audio-visual documents using ontologies (Th. Dechilly, B. Bachimont) - Building the ontological foundations of a terminology from natural language to conceptual graphs with Ribosome, a knowledge extraction system (Ch. Jacquelinet, A. Burgun) - CharGer: some lessons learned and new directions (H.S. Delugach) - Knowledge management using conceptual graphs (W.K. Pun)
  7. Kishida, K.: Term disambiguation techniques based on target document collection for cross-language information retrieval : an empirical comparison of performance between techniques (2007) 0.03
    
    Abstract
    Dictionary-based query translation for cross-language information retrieval often yields several translation candidates with different meanings for a source term in the query. This paper examines methods for resolving translation ambiguity based only on the target document collections. First, we discuss two kinds of disambiguation technique: (1) a method using term co-occurrence statistics in the collection, and (2) a technique based on pseudo-relevance feedback. Next, these techniques are empirically compared using the CLEF 2003 test collection for German to Italian bilingual searches, which are executed using English as a pivot. The experiments showed that a variant of the term co-occurrence based techniques, in which the best-sequence algorithm for selecting translations is used with the Cosine coefficient, is dominant, and that the PRF method shows comparably high search performance, although statistical tests did not sufficiently support these conclusions. Furthermore, we repeat the same experiments for the case of French to Italian (pivot) and English to Italian (non-pivot) searches on the same CLEF 2003 test collection in order to verify our findings. Again, similar results were observed, except that the Dice coefficient slightly outperforms the Cosine coefficient in the case of disambiguation based on term co-occurrence for English to Italian searches.
    Source
    Information processing and management. 43(2007) no.1, pp.103-120
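    Note
    A minimal sketch of the first disambiguation idea above: select the combination of translation candidates with the strongest co-occurrence in the target collection. All terms and counts are invented, and a raw co-occurrence count stands in for the Cosine/Dice coefficients used in the paper:
      from itertools import product

      # Hypothetical dictionary output: source terms with candidate translations.
      candidates = {"bank": ["banca", "riva"], "account": ["conto", "resoconto"]}

      def cooc(t1: str, t2: str) -> int:
          # Stand-in for collection statistics: documents containing both terms.
          stats = {("banca", "conto"): 42, ("riva", "conto"): 1,
                   ("banca", "resoconto"): 3, ("riva", "resoconto"): 0}
          return stats.get((t1, t2), stats.get((t2, t1), 0))

      # Pick the combination whose total pairwise co-occurrence is highest.
      best = max(product(*candidates.values()),
                 key=lambda combo: sum(cooc(a, b) for i, a in enumerate(combo)
                                       for b in combo[i + 1:]))
      print(best)  # ('banca', 'conto')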
  8. Andrushchenko, M.; Sandberg, K.; Turunen, R.; Marjanen, J.; Hatavara, M.; Kurunmäki, J.; Nummenmaa, T.; Hyvärinen, M.; Teräs, K.; Peltonen, J.; Nummenmaa, J.: Using parsed and annotated corpora to analyze parliamentarians' talk in Finland (2022) 0.03
    
    Abstract
    We present a search system for grammatically analyzed corpora of Finnish parliamentary records and interviews with former parliamentarians, annotated with metadata on talk structure and the parliamentarians involved, and discuss their use through carefully chosen digital humanities case studies. We first introduce the construction, contents, and principles of use of the corpora. Then we discuss the application of the search system and the corpora to study how politicians talk about power, how ideological terms are used in political speech, and how to identify narratives in the data. All case studies stem from questions in the humanities and the social sciences, but rely on the grammatically parsed corpora in both identifying and quantifying passages of interest. Finally, the paper discusses the role of natural language processing methods for questions in the (digital) humanities. It makes the claim that a digital humanities inquiry of parliamentary speech and interviews with politicians cannot rely on computational humanities modeling alone, but needs to accommodate a range of perspectives, starting with simple searches and quantitative exploration and ending with modeling. Furthermore, the digital humanities need a more thorough discussion about how the use of tools from information science and technology alters the research questions posed in the humanities.
  9. Airio, E.; Kettunen, K.: Does dictionary based bilingual retrieval work in a non-normalized index? (2009) 0.02
    
    Abstract
    Many operational IR indexes are non-normalized, i.e. no lemmatization, stemming, or similar techniques have been employed in indexing. This poses a challenge for dictionary-based cross-language retrieval (CLIR), because translations are mostly lemmas. In this study, we face the challenge of dictionary-based CLIR in a non-normalized index. We test two alternative approaches: FCG (Frequent Case Generation) and s-gramming. The idea of FCG is to automatically generate the most frequent inflected forms for a given lemma. FCG has been tested in monolingual retrieval and has been shown to be a good method for inflected retrieval, especially for highly inflected languages. S-gramming is an approximate string matching technique (an extension of n-gramming). The language pairs in our tests were English-Finnish, English-Swedish, Swedish-Finnish and Finnish-Swedish. Both our approaches performed quite well, but the results varied depending on the language pair. S-gramming and FCG performed about equally in all language pairs except Finnish-Swedish, where s-gramming outperformed FCG.
    Source
    Information processing and management. 45(2009) no.6, pp.703-713
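    Note
    S-gramming generalizes character n-gram matching by also forming grams from non-adjacent characters. A minimal sketch of the plain n-gram (digram) baseline that the idea extends, using Dice similarity to match a dictionary lemma against inflected index forms; the Finnish forms are illustrative:
      def digrams(word: str) -> set:
          # Character 2-grams, e.g. "talo" -> {"ta", "al", "lo"}.
          return {word[i:i + 2] for i in range(len(word) - 1)}

      def dice(a: str, b: str) -> float:
          # Dice coefficient over digram sets: 2*|A & B| / (|A| + |B|).
          ga, gb = digrams(a), digrams(b)
          return 2 * len(ga & gb) / (len(ga) + len(gb))

      # Match the lemma "talo" (house) against forms in a non-normalized index.
      forms = ["taloissa", "talossa", "kadulla"]
      print(max(forms, key=lambda f: dice("talo", f)))  # 'talossa'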
  10. Kracht, M.: Mathematical linguistics (2002) 0.02
    
    Abstract
    This book studies language(s) and linguistic theories from a mathematical point of view. Starting with ideas already contained in Montague's work, it develops the mathematical foundations of present day linguistics. It equips the reader with all the background necessary to understand and evaluate theories as diverse as Montague Grammar, Categorial Grammar, HPSG and GB. The mathematical tools are mainly from universal algebra and logic, but no particular knowledge is presupposed beyond a certain mathematical sophistication that is in any case needed in order to fruitfully work within these theories. The presentation focuses on abstract mathematical structures and their computational properties, but plenty of examples from different natural languages are provided to illustrate the main concepts and results. In contrast to books devoted to so-called formal language theory, languages are seen here as semiotic systems, that is, as systems of signs. A language sign correlates form with meaning. Using the principle of compositionality it is possible to gain substantial insight into the interaction between form and meaning in natural languages.
    Series
    Studies in generative grammar; 63
  11. Melby, A.: Some notes on 'The proper place of men and machines in language translation' (1997) 0.02
    
    Abstract
    Responds to Kay, M.: The proper place of men and machines in language translation. Examines the appropriateness of machine translation (MT) under the following special circumstances: controlled domain-specific text and high-quality output; controlled domain-specific text and indicative output; dynamic general text and indicative output; and dynamic general text and high-quality output. MT is appropriate in the first three cases, but the fourth case requires human translation. Examines how MT research could be more useful for aiding human translation.
    Date
    31. 7.1996 9:22:19
  12. Vilares, J.; Alonso, M.A.; Vilares, M.: Extraction of complex index terms in non-English IR : a shallow parsing based approach (2008) 0.02
    
    Abstract
    The performance of information retrieval systems is limited by the linguistic variation present in natural language texts. Word-level natural language processing techniques have been shown to be useful in reducing this variation. In this article, we summarize our work on the extension of these techniques for dealing with phrase-level variation in European languages, taking Spanish as a case in point. We propose the use of syntactic dependencies as complex index terms in an attempt to solve the problems deriving from both syntactic and morpho-syntactic variation and, in this way, to obtain more precise index terms. Such dependencies are obtained through a shallow parser based on cascades of finite-state transducers in order to reduce as far as possible the overhead due to this parsing process. The use of different sources of syntactic information, queries or documents, has also been studied, as has restricting the dependencies to those obtained from noun phrases. Our approaches have been tested using the CLEF corpus, obtaining consistent improvements with regard to classical word-level non-linguistic techniques. Results show, on the one hand, that syntactic information extracted from documents is more useful than that from queries. On the other hand, it has been demonstrated that by restricting dependencies to those corresponding to noun phrases, important reductions of storage and management costs can be achieved, albeit at the expense of a slight reduction in performance.
    Source
    Information processing and management. 44(2008) no.4, pp.1517-1537
  13. Tao, J.; Zhou, L.; Hickey, K.: Making sense of the black-boxes : toward interpretable text classification using deep learning models (2023) 0.02
    
    Abstract
    Text classification is a common task in data science. Despite the superior performances of deep learning based models in various text classification tasks, their black-box nature poses significant challenges for wide adoption. The knowledge-to-action framework emphasizes several principles concerning the application and use of knowledge, such as ease-of-use, customization, and feedback. With the guidance of the above principles and the properties of interpretable machine learning, we identify the design requirements for and propose an interpretable deep learning (IDeL) based framework for text classification models. IDeL comprises three main components: feature penetration, instance aggregation, and feature perturbation. We evaluate our implementation of the framework with two distinct case studies: fake news detection and social question categorization. The experimental results provide evidence for the efficacy of IDeL components in enhancing the interpretability of text classification models. Moreover, the findings are generalizable across binary and multi-label, multi-class classification problems. The proposed IDeL framework introduces a unique iField perspective for building trusted models in data science by improving the transparency of and access to advanced black-box models.
  14. Campe, P.: Case, semantic roles, and grammatical relations : a comprehensive bibliography (1994) 0.02
    
  15. Santana Suárez, O.; Carreras Riudavets, F.J.; Hernández Figueroa, Z.; González Cabrera, A.C.: Integration of an XML electronic dictionary with linguistic tools for natural language processing (2007) 0.02
    
    Abstract
    This study proposes the codification of lexical information in electronic dictionaries, in accordance with a generic and extendable XML scheme model, and its conjunction with linguistic tools for the processing of natural language. Our approach differs from other similar studies in that we propose XML coding of those items from a dictionary of meanings that are less related to the lexical units. Linguistic information, such as morphology, syllables, phonology, etc., will be included by means of specific linguistic tools. The use of XML as a container for the information allows the use of other XML tools for carrying out searches or for enabling presentation of the information in different resources. This model is particularly important as it combines two parallel paradigms (extendable labelling of documents and computational linguistics) and it is also applicable to other languages. We have included a comparison with the labelling proposal for printed dictionaries carried out by the Text Encoding Initiative (TEI). The proposed design has been validated with a dictionary of more than 145,000 accepted meanings.
    Source
    Information processing and management. 43(2007) no.4, pp.946-957
  16. Oard, D.W.; He, D.; Wang, J.: User-assisted query translation for interactive cross-language information retrieval (2008) 0.02
    
    Abstract
    Interactive Cross-Language Information Retrieval (CLIR), a process in which searcher and system collaborate to find documents that satisfy an information need regardless of the language in which those documents are written, calls for designs in which synergies between searcher and system can be leveraged so that the strengths of one can cover weaknesses of the other. This paper describes an approach that employs user-assisted query translation to help searchers better understand the system's operation. Supporting interaction and interface designs are introduced, and results from three user studies are presented. The results indicate that experienced searchers presented with this new system evolve new search strategies that make effective use of the new capabilities, that they achieve retrieval effectiveness comparable to results obtained using fully automatic techniques, and that reported satisfaction with support for cross-language searching increased. The paper concludes with a description of a freely available interactive CLIR system that incorporates lessons learned from this research.
    Source
    Information processing and management. 44(2008) no.1, pp.181-211
  17. Gencosman, B.C.; Ozmutlu, H.C.; Ozmutlu, S.: Character n-gram application for automatic new topic identification (2014) 0.01
    
    Abstract
    The widespread availability of the Internet and the variety of Internet-based applications have resulted in a significant increase in the amount of web pages. Determining the behaviors of search engine users has become a critical step in enhancing search engine performance. Search engine user behaviors can be determined by content-based or content-ignorant algorithms. Although many content-ignorant studies have been performed to automatically identify new topics, previous results have demonstrated that spelling errors can cause significant errors in topic shift estimates. In this study, we focused on minimizing the number of wrong estimates that were based on spelling errors. We developed a new hybrid algorithm combining character n-gram and neural network methodologies, and compared the experimental results with results from previous studies. For the FAST and Excite datasets, the proposed algorithm improved topic shift estimates by 6.987% and 2.639%, respectively. Moreover, we analyzed the performance of the character n-gram method in different aspects, including a comparison with the Levenshtein edit-distance method. The experimental results demonstrated that the character n-gram method outperformed the Levenshtein edit-distance method in terms of topic identification.
    Source
    Information processing and management. 50(2014) no.6, pp.821-856
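    Note
    The Levenshtein edit-distance baseline mentioned in the abstract is easy to state precisely. A minimal sketch; the 0.5 threshold and the query pair are invented for illustration:
      def levenshtein(a: str, b: str) -> int:
          # Classic dynamic-programming edit distance.
          prev = list(range(len(b) + 1))
          for i, ca in enumerate(a, 1):
              cur = [i]
              for j, cb in enumerate(b, 1):
                  cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                                 prev[j - 1] + (ca != cb)))
              prev = cur
          return prev[-1]

      # Crude topic-shift flag between consecutive queries in a session: a small
      # relative distance suggests a respelling rather than a new topic.
      q1, q2 = "informaton retrieval", "information retrieval"
      shift = levenshtein(q1, q2) / max(len(q1), len(q2)) > 0.5
      print(levenshtein(q1, q2), shift)  # 1 False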
  18. Semantic role universals and argument linking : theoretical, typological, and psycholinguistic perspectives (2006) 0.01
    
    Content
    Contents: Argument hierarchy and other factors determining argument realization / Dieter Wunderlich - Mismatches in semantic-role hierarchies and the dimensions of role semantics / Beatrice Primus - Thematic roles : universal, particular, and idiosyncratic aspects / Manfred Bierwisch - Experiencer constructions in Daghestanian languages / Bernard Comrie and Helma van den Berg - Clause-level vs. predicate-level linking / Balthasar Bickel - From meaning to syntax: semantic roles and beyond / Walter Bisang - Meaning, form and function in basic case roles / Georg Bossong - Semantic macroroles and language processing / Robert D. Van Valin, Jr. - Thematic roles as event structure relations / Maria Mercedes Piñango - Generalised semantic roles and syntactic templates: a new framework for language comprehension / Ina Bornkessel and Matthias Schlesewsky
    Series
    Trends in linguistics. Studies and monographs; 165
  19. Pepper, S.: ¬The typology and semantics of binominal lexemes : noun-noun compounds and their functional equivalents (2020) 0.01
    
    Abstract
    The dissertation establishes 'binominal lexeme' as a comparative concept and discusses its cross-linguistic typology and semantics. Informally, a binominal lexeme is a noun-noun compound or functional equivalent; more precisely, it is a lexical item that consists primarily of two thing-morphs between which there exists an unstated semantic relation. Examples of binominals include Mandarin Chinese 铁路 (tielù) [iron road], French chemin de fer [way of iron] and Russian железная дорога (zeleznaja doroga) [iron:adjz road]. All of these combine a word denoting 'iron' and a word denoting 'road' or 'way' to denote the meaning railway. In each case, the unstated semantic relation is one of composition: a railway is conceptualized as a road that is composed (or made) of iron. However, three different morphosyntactic strategies are employed: compounding, prepositional phrase and relational adjective. This study explores the range of such strategies used by a worldwide sample of 106 languages to express a set of 100 meanings from various semantic domains, resulting in a classification consisting of nine different morphosyntactic types. The semantic relations found in the data are also explored and a classification called the Hatcher-Bourque system is developed that operates at two levels of granularity, together with a tool for classifying binominals, the Bourquifier. The classification is extended to other subfields of language, including metonymy and lexical semantics, and beyond language to the domain of knowledge representation, resulting in a proposal for a general model of associative relations called the PHAB model. The many findings of the research include universals concerning the recruitment of anchoring nominal modification strategies, a method for comparing non-binary typologies, the non-universality (despite its predominance) of compounding, and a scale of frequencies for semantic relations which may provide insights into the associative nature of human thought.
    Imprint
    Oslo : University of Oslo / Faculty of Humanities / Department of Linguistics and Scandinavian Studies
  20. SIGIR'92 : Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (1992) 0.01
    
    Content
    HARMAN, D.: Relevance feedback revisited; AALBERSBERG, I.J.: Incremental relevance feedback; TAGUE-SUTCLIFFE, J.: Measuring the informativeness of a retrieval process; LEWIS, D.D.: An evaluation of phrasal and clustered representations on a text categorization task; BLOSSEVILLE, M.J., G. HÉBRAIL, M.G. MONTEIL and N. PÉNOT: Automatic document classification: natural language processing, statistical analysis, and expert system techniques used together; MASAND, B., G. LINOFF and D. WALTZ: Classifying news stories using memory based reasoning; KEEN, E.M.: Term position ranking: some new test results; CROUCH, C.J. and B. YANG: Experiments in automatic statistical thesaurus construction; GREFENSTETTE, G.: Use of syntactic context to produce term association lists for text retrieval; ANICK, P.G. and R.A. FLYNN: Versioning of full-text information retrieval system; BURKOWSKI, F.J.: Retrieval activities in a database consisting of heterogeneous collections; DEERWESTER, S.C., K. WACLENA and M. LaMAR: A textual object management system; NIE, J.-Y.: Towards a probabilistic modal logic for semantic-based information retrieval; WANG, A.W., S.K.M. WONG and Y.Y. YAO: An analysis of vector space models based on computational geometry; BARTELL, B.T., G.W. COTTRELL and R.K. BELEW: Latent semantic indexing is an optimal special case of multidimensional scaling; GLAVITSCH, U. and P. SCHÄUBLE: A system for retrieving speech documents; MARGULIS, E.L.: N-Poisson document modelling; HESS, M.: An incrementally extensible document retrieval system based on linguistics and logical principles; COOPER, W.S., F.C. GEY and D.P. DABNEY: Probabilistic retrieval based on staged logistic regression; FUHR, N.: Integration of probabilistic fact and text retrieval; CROFT, B., L.A. SMITH and H. TURTLE: A loosely-coupled integration of a text retrieval system and an object-oriented database system; DUMAIS, S.T. and J. NIELSEN: Automating the assignment of submitted manuscripts to reviewers; GOST, M.A. and M. MASOTTI: Design of an OPAC database to permit different subject searching accesses; ROBERTSON, A.M. and P. WILLETT: Searching for historical word forms in a database of 17th century English text using spelling correction methods; FOX, E.A., Q.F. CHEN and L.S. HEATH: A faster algorithm for constructing minimal perfect hash functions; MOFFAT, A. and J. ZOBEL: Parameterised compression for sparse bitmaps; GRANDI, F., P. TIBERIO and P. ZEZULA: Frame-sliced partitioned parallel signature files; ALLEN, B.: Cognitive differences in end user searching of a CD-ROM index; SONNENWALD, D.H.: Developing a theory to guide the process of designing information retrieval systems; CUTTING, D.R., J.O. PEDERSEN, D. KARGER and J.W. TUKEY: Scatter/Gather: a cluster-based approach to browsing large document collections; CHALMERS, M. and P. CHITSON: Bead: Explorations in information visualization; WILLIAMSON, C. and B. SHNEIDERMAN: The dynamic HomeFinder: evaluating dynamic queries in a real-estate information exploring system

Languages

  • e 151
  • d 25
  • m 3
  • f 1
  • ru 1

Types

  • a 146
  • m 19
  • s 10
  • el 7
  • x 6
  • p 4
  • b 1
  • d 1
