Search (176 results, page 1 of 9)

  • theme_ss:"Computerlinguistik"
  1. Way, E.C.: Knowledge representation and metaphor (or: meaning) (1994) 0.11
    0.10666951 = product of:
      0.21333902 = sum of:
        0.1387685 = weight(_text_:representation in 771) [ClassicSimilarity], result of:
          0.1387685 = score(doc=771,freq=6.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.7043805 = fieldWeight in 771, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0625 = fieldNorm(doc=771)
        0.07457052 = product of:
          0.111855775 = sum of:
            0.06544521 = weight(_text_:theory in 771) [ClassicSimilarity], result of:
              0.06544521 = score(doc=771,freq=2.0), product of:
                0.1780563 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.042818543 = queryNorm
                0.36755344 = fieldWeight in 771, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0625 = fieldNorm(doc=771)
            0.04641057 = weight(_text_:22 in 771) [ClassicSimilarity], result of:
              0.04641057 = score(doc=771,freq=2.0), product of:
                0.14994325 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042818543 = queryNorm
                0.30952093 = fieldWeight in 771, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=771)
          0.6666667 = coord(2/3)
      0.5 = coord(2/4)
    Content
    Contains the following 9 chapters: The literal and the metaphoric; Views of metaphor; Knowledge representation; Representation schemes and conceptual graphs; The dynamic type hierarchy theory of metaphor; Computational approaches to metaphor; The nature and structure of semantic hierarchies; Language games, open texture and family resemblance; Programming the dynamic type hierarchy; Subject index
    Footnote
    Already published by Kluwer in 1991 // Rev. in: Knowledge organization 22(1995) no.1, pp.48-49 (O. Sechser)
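The score breakdowns in these results are Lucene "explain" output for ClassicSimilarity (TF-IDF). As a sketch, the `representation` clause of entry 1 can be recomputed from the factors printed in the tree above (tf = √freq; idf, queryNorm, and fieldNorm are copied verbatim):

```python
import math

# Constants copied from the explain tree for term "representation" in doc 771.
freq = 6.0
idf = 4.600994           # ClassicSimilarity: 1 + ln(maxDocs / (docFreq + 1))
query_norm = 0.042818543
field_norm = 0.0625      # 1/sqrt(field length), quantized by Lucene

tf = math.sqrt(freq)                  # sublinear term-frequency dampening
query_weight = idf * query_norm       # printed as 0.19700786
field_weight = tf * idf * field_norm  # printed as 0.7043805
score = query_weight * field_weight   # printed as 0.1387685
```

The idf line can be cross-checked against the printed docFreq=1206 and maxDocs=44218: 1 + ln(44218/1207) ≈ 4.601, matching the tree.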
  2. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.10477377 = product of:
      0.13969836 = sum of:
        0.06800719 = product of:
          0.20402157 = sum of:
            0.20402157 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.20402157 = score(doc=562,freq=2.0), product of:
                0.36301607 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.042818543 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.060088523 = weight(_text_:representation in 562) [ClassicSimilarity], result of:
          0.060088523 = score(doc=562,freq=2.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.3050057 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.011602643 = product of:
          0.034807928 = sum of:
            0.034807928 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.034807928 = score(doc=562,freq=2.0), product of:
                0.14994325 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042818543 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
      0.75 = coord(3/4)
    Abstract
    Document representations for text classification are typically based on the classical Bag-Of-Words paradigm. This approach comes with deficiencies that motivate the integration of features on a higher semantic level than single words. In this paper we propose an enhancement of the classical document representation through concepts extracted from background knowledge. Boosting is used for actual classification. Experimental evaluations on two well known text corpora support our approach through consistent improvement of the results.
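As a toy illustration of the idea in this abstract, a bag-of-words vector can be augmented with concept features drawn from background knowledge. The synonym map and function names below are invented for the example, not the authors' code:

```python
from collections import Counter

# Toy background knowledge mapping terms to broader concepts
# (purely illustrative; the paper draws on a real knowledge resource).
CONCEPTS = {"car": "vehicle", "truck": "vehicle", "rose": "flower"}

def represent(tokens):
    """Bag-of-words counts, augmented with concept-level counts."""
    features = Counter(tokens)
    for t in tokens:
        if t in CONCEPTS:
            features["concept:" + CONCEPTS[t]] += 1
    return features

rep = represent(["car", "truck", "passed"])
# "car" and "truck" both contribute to the shared feature "concept:vehicle",
# letting a classifier generalize across synonymous surface terms.
```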
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf
    Date
    8. 1.2013 10:22:32
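The outer structure of each explain tree is a Boolean sum scaled by a coordination factor: coord(m/n) multiplies the combined clause scores by m/n, the fraction of query clauses that matched. Entry 2's total can be checked with the constants copied from its tree:

```python
# Clause scores for doc 562, copied from the explain tree above; the "3a"
# and "22" clauses each carry a nested coord(1/3) before entering the sum.
clause_3a   = 0.20402157 * (1 / 3)
clause_repr = 0.060088523
clause_22   = 0.034807928 * (1 / 3)

# coord(3/4): 3 of the 4 top-level query clauses matched this document.
total = (clause_3a + clause_repr + clause_22) * (3 / 4)  # printed as 0.10477377
```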
  3. Hodgson, J.P.E.: Knowledge representation and language in AI (1991) 0.09
    0.09267894 = product of:
      0.18535788 = sum of:
        0.1660759 = weight(_text_:representation in 1529) [ClassicSimilarity], result of:
          0.1660759 = score(doc=1529,freq=22.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.84299123 = fieldWeight in 1529, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1529)
        0.01928198 = product of:
          0.057845935 = sum of:
            0.057845935 = weight(_text_:theory in 1529) [ClassicSimilarity], result of:
              0.057845935 = score(doc=1529,freq=4.0), product of:
                0.1780563 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.042818543 = queryNorm
                0.3248744 = fieldWeight in 1529, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1529)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    Abstract
    The aim of this book is to highlight the relationship between knowledge representation and language in artificial intelligence, and in particular the way in which the choice of representation influences the language used to discuss a problem - and vice versa. Opening with a discussion of knowledge representation methods, and following this with a look at reasoning methods, the author begins to make his case for the intimate relationship between language and representation. He shows how each representation method fits particularly well with some reasoning methods and less so with others, using specific languages as examples. The question of representation change, an important and complex issue about which very little is known, is addressed. Dr Hodgson gathers together recent work on problem solving, showing how, in some cases, it has been possible to use representation changes to recast problems into a language that makes them easier to solve. The author maintains throughout that the relationships that this book explores lie at the heart of the construction of large systems, examining a number of the current large AI systems from the viewpoint of representation and language to prove his point.
    LCSH
    Knowledge / representation (Information theory)
    Subject
    Knowledge / representation (Information theory)
  4. Griffiths, T.L.; Steyvers, M.: A probabilistic approach to semantic representation (2002) 0.08
    0.080422625 = product of:
      0.16084525 = sum of:
        0.1387685 = weight(_text_:representation in 3671) [ClassicSimilarity], result of:
          0.1387685 = score(doc=3671,freq=6.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.7043805 = fieldWeight in 3671, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0625 = fieldNorm(doc=3671)
        0.022076748 = product of:
          0.066230245 = sum of:
            0.066230245 = weight(_text_:29 in 3671) [ClassicSimilarity], result of:
              0.066230245 = score(doc=3671,freq=4.0), product of:
                0.15062225 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042818543 = queryNorm
                0.43971092 = fieldWeight in 3671, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3671)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    Abstract
    Semantic networks produced from human data have statistical properties that cannot be easily captured by spatial representations. We explore a probabilistic approach to semantic representation that explicitly models the probability with which words occur in different contexts, and hence captures the probabilistic relationships between words. We show that this representation has statistical properties consistent with the large-scale structure of semantic networks constructed by humans, and trace the origins of these properties.
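A minimal numeric sketch of such a probabilistic representation, read here as a topic mixture (all distributions below are invented toy values, not the paper's data): the probability of a word in a context is the sum over latent topics of P(word | topic) · P(topic | context).

```python
# Toy conditional distributions: P(word | topic) and P(topic | context).
p_word_given_topic = {
    "bank":  {"finance": 0.10, "river": 0.08},
    "money": {"finance": 0.12, "river": 0.001},
}
p_topic_given_context = {"finance": 0.7, "river": 0.3}

def p_word_in_context(word):
    """P(w | context) = sum over topics z of P(w | z) * P(z | context)."""
    return sum(p_word_given_topic[word][z] * p
               for z, p in p_topic_given_context.items())

# A finance-leaning context makes "money" likelier than a river-leaning one would.
p_bank = p_word_in_context("bank")
p_money = p_word_in_context("money")
```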
    Date
    29. 6.2015 14:55:01
    29. 6.2015 16:09:05
  5. Sembok, T.M.T.; Rijsbergen, C.J. van: SILOL: a simple logical-linguistic document retrieval system (1990) 0.05
    0.05096655 = product of:
      0.1019331 = sum of:
        0.08011803 = weight(_text_:representation in 6684) [ClassicSimilarity], result of:
          0.08011803 = score(doc=6684,freq=2.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.40667427 = fieldWeight in 6684, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0625 = fieldNorm(doc=6684)
        0.021815069 = product of:
          0.06544521 = sum of:
            0.06544521 = weight(_text_:theory in 6684) [ClassicSimilarity], result of:
              0.06544521 = score(doc=6684,freq=2.0), product of:
                0.1780563 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.042818543 = queryNorm
                0.36755344 = fieldWeight in 6684, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6684)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    Abstract
    Describes a system called SILOL which is based on a logical-linguistic model of document retrieval systems. SILOL uses a shallow semantic translation of natural language texts into a first order predicate representation in performing a document indexing and retrieval process. Some preliminary experiments have been carried out to test the retrieval effectiveness of this system. The results obtained show improvements in the level of retrieval effectiveness, which demonstrate that the approach of using a semantic theory of natural language and logic in document retrieval systems is a valid one
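A trivial sketch of the kind of shallow translation the abstract describes, mapping a clause to a first-order predicate term (the grammar and names are invented for illustration, not SILOL itself):

```python
# Map a subject-verb-object clause to a first-order predicate term,
# e.g. "the system retrieves documents" -> retrieve(system, document).
def to_predicate(subject, verb, obj):
    return f"{verb}({subject}, {obj})"

term = to_predicate("system", "retrieve", "document")
```

Indexing documents and queries in the same predicate form then allows retrieval to be cast as logical matching rather than plain term overlap.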
  6. Ghenima, M.: ¬A system of 'computer-aided diacritisation' using a lexical database of Arabic language (1998) 0.05
    0.05096655 = product of:
      0.1019331 = sum of:
        0.08011803 = weight(_text_:representation in 74) [ClassicSimilarity], result of:
          0.08011803 = score(doc=74,freq=2.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.40667427 = fieldWeight in 74, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0625 = fieldNorm(doc=74)
        0.021815069 = product of:
          0.06544521 = sum of:
            0.06544521 = weight(_text_:theory in 74) [ClassicSimilarity], result of:
              0.06544521 = score(doc=74,freq=2.0), product of:
                0.1780563 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.042818543 = queryNorm
                0.36755344 = fieldWeight in 74, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0625 = fieldNorm(doc=74)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    Abstract
    The aim of research in the Natural Language Processing (NLP) area is to design and develop systems that process, understand, and interpret natural language. It employs knowledge from various fields such as artificial intelligence (knowledge representation, reasoning), formal language theory (language analysis, parsing), and theoretical and computational linguistics (models of language structures)
  7. Conceptual structures : theory, tools and applications. 6th International Conference on Conceptual Structures, ICCS'98, Montpellier, France, August, 10-12, 1998, Proceedings (1998) 0.05
    0.05096655 = product of:
      0.1019331 = sum of:
        0.08011803 = weight(_text_:representation in 1378) [ClassicSimilarity], result of:
          0.08011803 = score(doc=1378,freq=2.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.40667427 = fieldWeight in 1378, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0625 = fieldNorm(doc=1378)
        0.021815069 = product of:
          0.06544521 = sum of:
            0.06544521 = weight(_text_:theory in 1378) [ClassicSimilarity], result of:
              0.06544521 = score(doc=1378,freq=2.0), product of:
                0.1780563 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.042818543 = queryNorm
                0.36755344 = fieldWeight in 1378, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1378)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    Abstract
    This book constitutes the refereed proceedings of the 6th International Conference on Conceptual Structures, ICCS'98, held in Montpellier, France, in August 1998. The 20 revised full papers and 10 research reports presented were carefully selected from a total of 66 submissions; also included are three invited contributions. The volume is divided in topical sections on knowledge representation and knowledge engineering, tools, conceptual graphs and other models, relationships with logics, algorithms and complexity, natural language processing, and applications.
  8. Dorr, B.J.: Large-scale dictionary construction for foreign language tutoring and interlingual machine translation (1997) 0.05
    0.04829032 = product of:
      0.09658064 = sum of:
        0.084978 = weight(_text_:representation in 3244) [ClassicSimilarity], result of:
          0.084978 = score(doc=3244,freq=4.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.4313432 = fieldWeight in 3244, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.046875 = fieldNorm(doc=3244)
        0.011602643 = product of:
          0.034807928 = sum of:
            0.034807928 = weight(_text_:22 in 3244) [ClassicSimilarity], result of:
              0.034807928 = score(doc=3244,freq=2.0), product of:
                0.14994325 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042818543 = queryNorm
                0.23214069 = fieldWeight in 3244, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3244)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    Abstract
    Describes techniques for automatic construction of dictionaries for use in large-scale foreign language tutoring (FLT) and interlingual machine translation (MT) systems. The dictionaries are based on a language independent representation called lexical conceptual structure (LCS). Demonstrates that synonymous verb senses share distribution patterns. Shows how the syntax-semantics relation can be used to develop a lexical acquisition approach that contributes both toward the enrichment of existing online resources and toward the development of lexicons containing more complete information than is provided in any of these resources alone. Describes the structure of the LCS and shows how this representation is used in FLT and MT. Focuses on the problem of building LCS dictionaries for large-scale FLT and MT. Describes authoring tools for manual and semi-automatic construction of LCS dictionaries. Presents an approach that uses linguistic techniques for building word definitions automatically. The techniques have been implemented as part of a set of lexicon-development tools used in the MILT FLT project
    Date
    31. 7.1996 9:22:19
  9. Stoykova, V.; Petkova, E.: Automatic extraction of mathematical terms for precalculus (2012) 0.04
    0.041881282 = product of:
      0.083762564 = sum of:
        0.07010327 = weight(_text_:representation in 156) [ClassicSimilarity], result of:
          0.07010327 = score(doc=156,freq=2.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.35583997 = fieldWeight in 156, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0546875 = fieldNorm(doc=156)
        0.013659291 = product of:
          0.040977873 = sum of:
            0.040977873 = weight(_text_:29 in 156) [ClassicSimilarity], result of:
              0.040977873 = score(doc=156,freq=2.0), product of:
                0.15062225 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042818543 = queryNorm
                0.27205724 = fieldWeight in 156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=156)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    Abstract
    In this work, we present the results of research for evaluating a methodology for extracting mathematical terms for precalculus using the techniques for semantically-oriented statistical search. We use the corpus-based approach and the combination of different statistically-based techniques for extracting keywords, collocations and co-occurrences incorporated in the Sketch Engine software. We evaluate the collocation candidate terms for the basic concept function(s) and validate the related methodology against precalculus domain conceptual term definitions. Finally, we offer a conceptual terms hierarchical representation and discuss the results with respect to their possible applications.
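Collocation extraction of the kind described typically scores word pairs by an association measure over corpus counts. Pointwise mutual information is one common such measure; the sketch below is illustrative and not necessarily the exact statistic Sketch Engine computes:

```python
import math

# Pointwise mutual information from corpus counts: how much more often
# a pair co-occurs than its parts would by chance.
def pmi(pair_count, count_a, count_b, n):
    """log2( P(a,b) / (P(a) * P(b)) ), with probabilities estimated as count/n."""
    return math.log2((pair_count / n) / ((count_a / n) * (count_b / n)))

# A pair that co-occurs 100x more often than chance scores log2(100) ~ 6.64.
score = pmi(pair_count=20, count_a=50, count_b=40, n=10_000)
```

High-scoring pairs become collocation candidates, which are then filtered against domain term definitions as the abstract describes.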
    Date
    29. 5.2012 10:17:08
  10. Warner, J.: Linguistics and information theory : analytic advantages (2007) 0.04
    0.04161345 = product of:
      0.0832269 = sum of:
        0.060088523 = weight(_text_:representation in 77) [ClassicSimilarity], result of:
          0.060088523 = score(doc=77,freq=2.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.3050057 = fieldWeight in 77, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.046875 = fieldNorm(doc=77)
        0.023138374 = product of:
          0.06941512 = sum of:
            0.06941512 = weight(_text_:theory in 77) [ClassicSimilarity], result of:
              0.06941512 = score(doc=77,freq=4.0), product of:
                0.1780563 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.042818543 = queryNorm
                0.3898493 = fieldWeight in 77, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.046875 = fieldNorm(doc=77)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    Abstract
    The analytic advantages of central concepts from linguistics and information theory, and the analogies demonstrated between them, for understanding patterns of retrieval from full-text indexes to documents are developed. The interaction between the syntagm and the paradigm in computational operations on written language in indexing, searching, and retrieval is used to account for transformations of the signified or meaning between documents and their representation and between queries and documents retrieved. Characteristics of the message, and messages for selection for written language, are brought to explain the relative frequency of occurrence of words and multiple word sequences in documents. The examples given in the companion article are revisited and a fuller example introduced. The signified of the sequence stood for, the term classically used in the definitions of the sign, as something standing for something else, can itself change rapidly according to its syntagm. A greater than ordinary discourse understanding of patterns in retrieval is obtained.
  11. Nielsen, R.D.; Ward, W.; Martin, J.H.; Palmer, M.: Extracting a representation from text for semantic analysis (2008) 0.04
    0.040059015 = product of:
      0.16023606 = sum of:
        0.16023606 = weight(_text_:representation in 3365) [ClassicSimilarity], result of:
          0.16023606 = score(doc=3365,freq=8.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.81334853 = fieldWeight in 3365, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0625 = fieldNorm(doc=3365)
      0.25 = coord(1/4)
    Abstract
    We present a novel fine-grained semantic representation of text and an approach to constructing it. This representation is largely extractable by today's technologies and facilitates more detailed semantic analysis. We discuss the requirements driving the representation, suggest how it might be of value in the automated tutoring domain, and provide evidence of its validity.
  12. Montgomery, C.A.: Linguistics and information science (1972) 0.04
    0.038224913 = product of:
      0.07644983 = sum of:
        0.060088523 = weight(_text_:representation in 6669) [ClassicSimilarity], result of:
          0.060088523 = score(doc=6669,freq=2.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.3050057 = fieldWeight in 6669, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.046875 = fieldNorm(doc=6669)
        0.016361302 = product of:
          0.049083903 = sum of:
            0.049083903 = weight(_text_:theory in 6669) [ClassicSimilarity], result of:
              0.049083903 = score(doc=6669,freq=2.0), product of:
                0.1780563 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.042818543 = queryNorm
                0.27566507 = fieldWeight in 6669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6669)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    Abstract
    This paper defines the relationship between linguistics and information science in terms of a common interest in natural language. The notion of automated processing of natural language - i.e., machine simulation of the language processing activities of a human - provides novel possibilities for interaction between linguists, who have a theoretical interest in such activities, and information scientists, who have more practical goals, e.g. simulating the language processing activities of an indexer with a machine. The concept of a natural language information system is introduced as a framework for reviewing automated language processing efforts by computational linguists and information scientists. In terms of this framework, the former have concentrated on automating the operations of the component for content analysis and representation, while the latter have emphasized the data management component. The complementary nature of these developments allows the postulation of an integrated approach to automated language processing. This approach, which is outlined in the final sections of the paper, incorporates current notions in linguistic theory and information science, as well as design features of recent computational linguistic models.
  13. Conceptual structures : logical, linguistic, and computational issues. 8th International Conference on Conceptual Structures, ICCS 2000, Darmstadt, Germany, August 14-18, 2000 (2000) 0.04
    0.037680827 = product of:
      0.075361654 = sum of:
        0.067181006 = weight(_text_:representation in 691) [ClassicSimilarity], result of:
          0.067181006 = score(doc=691,freq=10.0), product of:
            0.19700786 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.042818543 = queryNorm
            0.34100673 = fieldWeight in 691, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0234375 = fieldNorm(doc=691)
        0.008180651 = product of:
          0.024541952 = sum of:
            0.024541952 = weight(_text_:theory in 691) [ClassicSimilarity], result of:
              0.024541952 = score(doc=691,freq=2.0), product of:
                0.1780563 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.042818543 = queryNorm
                0.13783254 = fieldWeight in 691, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=691)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    Abstract
    Computer scientists create models of a perceived reality. Through AI techniques, these models aim at providing the basic support for emulating cognitive behavior such as reasoning and learning, which is one of the main goals of the AI research effort. Such computer models are formed through the interaction of various acquisition and inference mechanisms: perception, concept learning, conceptual clustering, hypothesis testing, probabilistic inference, etc., and are represented using different paradigms tightly linked to the processes that use them. Among these paradigms let us cite: biological models (neural nets, genetic programming), logic-based models (first-order logic, modal logic, rule-based systems), virtual reality models (object systems, agent systems), probabilistic models (Bayesian nets, fuzzy logic), linguistic models (conceptual dependency graphs, language-based representations), etc. One of the strengths of the Conceptual Graph (CG) theory is its versatility in terms of the representation paradigms under which it falls. It can be viewed, and therefore used, under different representation paradigms, which makes it a popular choice for a wealth of applications. Its full coupling with different cognitive processes led to the opening of the field toward related research communities such as the Description Logic, Formal Concept Analysis, and Computational Linguistic communities. We now see more and more research results from one community enrich the other, laying the foundations of common philosophical grounds from which a successful synergy can emerge. ICCS 2000 embodies this spirit of research collaboration. It presents a set of papers that we believe, by their exposure, will benefit the whole community.
For instance, the technical program proposes tracks on Conceptual Ontologies, Language, Formal Concept Analysis, Computational Aspects of Conceptual Structures, and Formal Semantics, with some papers on pragmatism and human related aspects of computing. Never before was the program of ICCS formed by so heterogeneously rooted theories of knowledge representation and use. We hope that this swirl of ideas will benefit you as much as it already has benefited us while putting together this program
    Content
    Concepts and Language: The Role of Conceptual Structure in Human Evolution (Keith Devlin) - Concepts in Linguistics - Concepts in Natural Language (Gisela Harras) - Patterns, Schemata, and Types: Author Support through Formalized Experience (Felix H. Gatzemeier) - Conventions and Notations for Knowledge Representation and Retrieval (Philippe Martin) - Conceptual Ontology: Ontology, Metadata, and Semiotics (John F. Sowa) - Pragmatically Yours (Mary Keeler) - Conceptual Modeling for Distributed Ontology Environments (Deborah L. McGuinness) - Discovery of Class Relations in Exception Structured Knowledge Bases (Hendra Suryanto, Paul Compton) - Conceptual Graphs: Perspectives: CGs Applications: Where Are We 7 Years after the First ICCS? (Michel Chein, David Genest) - The Engineering of a CG-Based System: Fundamental Issues (Guy W. Mineau) - Conceptual Graphs, Metamodeling, and Notation of Concepts (Olivier Gerbé, Guy W. Mineau, Rudolf K. Keller) - Knowledge Representation and Reasonings Based on Graph Homomorphism (Marie-Laure Mugnier) - User Modeling Using Conceptual Graphs for Intelligent Agents (James F. Baldwin, Trevor P. Martin, Aimilia Tzanavari) - Towards a Unified Querying System of Both Structured and Semi-structured Imprecise Data Using Fuzzy View (Patrice Buche, Ollivier Haemmerlé) - Formal Semantics of Conceptual Structures: The Extensional Semantics of the Conceptual Graph Formalism (Guy W. Mineau) - Semantics of Attribute Relations in Conceptual Graphs (Pavel Kocura) - Nested Concept Graphs and Triadic Power Context Families (Susanne Prediger) - Negations in Simple Concept Graphs (Frithjof Dau) - Extending the CG Model by Simulations (Jean-François Baget) - Contextual Logic and Formal Concept Analysis: Building and Structuring Description Logic Knowledge Bases Using Least Common Subsumers and Concept Analysis (Franz Baader, Ralf Molitor) - On the Contextual Logic of Ordinal Data (Silke Pollandt, Rudolf Wille) - Boolean Concept Logic (Rudolf Wille) - Lattices of Triadic Concept Graphs (Bernd Groh, Rudolf Wille) - Formalizing Hypotheses with Concepts (Bernhard Ganter, Sergei O. Kuznetsov) - Generalized Formal Concept Analysis (Laurent Chaudron, Nicolas Maille) - A Logical Generalization of Formal Concept Analysis (Sébastien Ferré, Olivier Ridoux) - On the Treatment of Incomplete Knowledge in Formal Concept Analysis (Peter Burmeister, Richard Holzer) - Conceptual Structures in Practice: Logic-Based Networks: Concept Graphs and Conceptual Structures (Peter W. Eklund) - Conceptual Knowledge Discovery and Data Analysis (Joachim Hereth, Gerd Stumme, Rudolf Wille, Uta Wille) - CEM - A Conceptual Email Manager (Richard Cole, Gerd Stumme) - A Contextual-Logic Extension of TOSCANA (Peter Eklund, Bernd Groh, Gerd Stumme, Rudolf Wille) - A Conceptual Graph Model for W3C Resource Description Framework (Olivier Corby, Rose Dieng, Cédric Hébert) - Computational Aspects of Conceptual Structures: Computing with Conceptual Structures (Bernhard Ganter) - Symmetry and the Computation of Conceptual Structures (Robert Levinson) - An Introduction to SNePS 3 (Stuart C. Shapiro) - Composition Norm Dynamics Calculation with Conceptual Graphs (Aldo de Moor) - From PROLOG++ to PROLOG+CG: A CG Object-Oriented Logic Programming Language (Adil Kabbaj, Martin Janta-Polczynski) - A Cost-Bounded Algorithm to Control Events Generalization (Gaël de Chalendar, Brigitte Grau, Olivier Ferret)
  14. Hoenkamp, E.; Bruza, P.D.; Song, D.; Huang, Q.: ¬An effective approach to verbose queries using a limited dependencies language model (2009) 0.04
    Abstract
    Intuitively, any 'bag of words' approach in IR should benefit from taking term dependencies into account. Unfortunately, for years the results of exploiting such dependencies have been mixed or inconclusive. To improve the situation, this paper shows how the natural language properties of the target documents can be used to transform and enrich the term dependencies to more useful statistics. This is done in three steps. The term co-occurrence statistics of queries and documents are each represented by a Markov chain. The paper proves that such a chain is ergodic, and therefore its asymptotic behavior is unique, stationary, and independent of the initial state. Next, the stationary distribution is taken to model queries and documents, rather than their initial distributions. Finally, ranking is achieved following the customary language modeling paradigm. The main contribution of this paper is to argue why the asymptotic behavior of the document model is a better representation than just the document's initial distribution. A secondary contribution is to investigate the practical application of this representation in case the queries become increasingly verbose. In the experiments (based on Lemur's search engine substrate) the default query model was replaced by the stable distribution of the query. Just modeling the query this way already resulted in significant improvements over a standard language model baseline. The results were on a par with or better than more sophisticated algorithms that use fine-tuned parameters or extensive training. Moreover, the more verbose the query, the more effective the approach seems to become.
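The ergodicity argument in this abstract lends itself to a small numerical sketch: build a stochastic matrix from term co-occurrence counts and power-iterate from two different initial states; both runs converge to the same stationary distribution, which is the quantity the paper proposes to use as the query/document model. All counts below are invented for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical co-occurrence counts for a 3-term vocabulary
# (rows: current term, columns: following term); values are illustrative.
counts = np.array([[2.0, 1.0, 1.0],
                   [1.0, 3.0, 2.0],
                   [1.0, 2.0, 4.0]])

# Row-normalise into the stochastic transition matrix P of the Markov chain.
P = counts / counts.sum(axis=1, keepdims=True)

def stationary(P, pi0, tol=1e-12, max_iter=10_000):
    """Power iteration pi <- pi @ P until the distribution stops changing."""
    pi = np.asarray(pi0, dtype=float)
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).sum() < tol:
            break
        pi = nxt
    return pi

# Ergodicity in action: two very different start states reach the same
# fixed point, so the stationary model is independent of the initial state.
pi_a = stationary(P, [1.0, 0.0, 0.0])
pi_b = stationary(P, [0.0, 0.0, 1.0])
```

Because every entry of this P is positive, the chain is ergodic and the fixed point `pi` satisfying `pi = pi @ P` is unique; that distribution would then feed the usual language-modeling ranking step.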
    Series
    Lecture notes in computer science : advances in information retrieval theory; 5766
    Source
    Second International Conference on the Theory of Information Retrieval, ICTIR 2009, Cambridge, UK, September 10-12, 2009. Proceedings. Ed.: L. Azzopardi
  15. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.04
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
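As a rough illustration of the kind of word-association measure that gets combined with the LocalMaxs algorithm here, the sketch below scores adjacent word pairs with the Dice coefficient. This is a stand-in chosen for brevity (the thesis proposes its own three measures); strongly "glued" pairs such as 'information retrieval' score highest.

```python
from collections import Counter

def bigram_dice(tokens):
    """Dice association score for each adjacent word pair:
    dice(x, y) = 2 * f(x, y) / (f(x) + f(y)).
    A simplified stand-in for the glue measures used with LocalMaxs."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    return {
        (x, y): 2 * f / (unigrams[x] + unigrams[y])
        for (x, y), f in bigrams.items()
    }

tokens = "information retrieval and information retrieval systems".split()
scores = bigram_dice(tokens)
# ('information', 'retrieval') scores 1.0: the two words always co-occur here.
```

LocalMaxs would then keep an n-gram as a term candidate when its glue is a local maximum relative to its shorter and longer neighbours; only the association measure is sketched above.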
    Date
    10. 1.2013 19:22:47
  16. L'Homme, M.-C.: Processing word combinations in existing terms banks (1995) 0.03
    Abstract
    How can specific word combinations be stored in computerized reference tools? The focus of this paper is on lexical word groups in special languages and their representation for translation purposes
  17. Rahmstorf, G.: Compositional semantics and concept representation (1991) 0.03
    Abstract
    Concept systems are not only used in the sciences, but also in secondary supporting fields, e.g. in libraries, in documentation, in terminology and increasingly also in knowledge representation. It is suggested that the development of concept systems be based on semantic analysis. Methodical steps are described. The principle of morpho-syntactic composition in semantics will serve as a theoretical basis for the suggested method. The implications and limitations of this principle will be demonstrated
  18. Chowdhury, G.G.: Natural language processing and information retrieval : pt.1: basic issues; pt.2: major applications (1991) 0.03
    Abstract
    Reviews the basic issues and procedures involved in natural language processing of textual material for final use in information retrieval. Covers: natural language processing; natural language understanding; syntactic and semantic analysis; parsing; knowledge bases and knowledge representation
  19. ¬The semantics of relationships : an interdisciplinary perspective (2002) 0.03
    Abstract
    Work on relationships takes place in many communities, including, among others, data modeling, knowledge representation, natural language processing, linguistics, and information retrieval. Unfortunately, continued disciplinary splintering and specialization keeps any one person from being familiar with the full expanse of that work. By including contributions from experts in a variety of disciplines and backgrounds, this volume demonstrates both the parallels that inform work on relationships across a number of fields and the singular emphases that have yet to be fully embraced. The volume is organized into 3 parts: (1) Types of relationships; (2) Relationships in knowledge representation and reasoning; (3) Applications of relationships
    Content
    Enthält die Beiträge: Pt.1: Types of relationships: CRUSE, D.A.: Hyponymy and its varieties; FELLBAUM, C.: On the semantics of troponymy; PRIBBENOW, S.: Meronymic relationships: from classical mereology to complex part-whole relations; KHOO, C. u.a.: The many facets of cause-effect relation - Pt.2: Relationships in knowledge representation and reasoning: GREEN, R.: Internally-structured conceptual models in cognitive semantics; HOVY, E.: Comparing sets of semantic relations in ontologies; GUARINO, N., C. WELTY: Identity and subsumption; JOUIS, C.: Logic of relationships - Pt.3: Applications of relationships: EVENS, M.: Thesaural relations in information retrieval; KHOO, C., S.H. MYAENG: Identifying semantic relations in text for information retrieval and information extraction; McCRAY, A.T., O. BODENREIDER: A conceptual framework for the biomedical domain; HETZLER, B.: Visual analysis and exploration of relationships
    Footnote
    Mit ausführlicher Einleitung der Herausgeber zu den Themen: Types of relationships - Relationships in knowledge representation and reasoning - Applications of relationships
  20. Rettinger, A.; Schumilin, A.; Thoma, S.; Ell, B.: Learning a cross-lingual semantic representation of relations expressed in text (2015) 0.03

Years

Languages

  • e 139
  • d 32
  • ru 2
  • chi 1

Types

  • a 139
  • m 23
  • el 14
  • s 12
  • x 4
  • p 2
  • d 1

Subjects

Classifications