Search (35 results, page 1 of 2)

  • theme_ss:"Linguistik"
  1. Rasmussen, L.: Selected linguistic problems in indexing within the Canadian context (1992) 0.07
    0.06635133 = product of:
      0.16587833 = sum of:
        0.0068111527 = weight(_text_:a in 1609) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=1609,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 1609, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=1609)
        0.15906717 = weight(_text_:91 in 1609) [ClassicSimilarity], result of:
          0.15906717 = score(doc=1609,freq=2.0), product of:
            0.25837386 = queryWeight, product of:
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.046368346 = queryNorm
            0.6156473 = fieldWeight in 1609, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.078125 = fieldNorm(doc=1609)
      0.4 = coord(2/5)
    
    Source
    Indexer. 18(1992) no.2, S.87-91
    Type
    a
  2. Warner, A.J.: Quantitative and qualitative assessments of the impact of linguistic theory on information science (1991) 0.05
    0.05149814 = product of:
      0.12874535 = sum of:
        0.009535614 = weight(_text_:a in 29) [ClassicSimilarity], result of:
          0.009535614 = score(doc=29,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.17835285 = fieldWeight in 29, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=29)
        0.11920974 = sum of:
          0.031257942 = weight(_text_:information in 29) [ClassicSimilarity], result of:
            0.031257942 = score(doc=29,freq=4.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.3840108 = fieldWeight in 29, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.109375 = fieldNorm(doc=29)
          0.087951794 = weight(_text_:22 in 29) [ClassicSimilarity], result of:
            0.087951794 = score(doc=29,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.5416616 = fieldWeight in 29, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=29)
      0.4 = coord(2/5)
    
    Date
    6. 1.1999 10:22:45
    Source
    Journal of the American Society for Information Science. 42(1991) no.1, S.64-71
    Type
    a
  3. O'Donnell, R.; Smeaton, A.F.: A linguistic approach to information retrieval (1996) 0.03
    0.027628005 = product of:
      0.06907001 = sum of:
        0.008173384 = weight(_text_:a in 2575) [ClassicSimilarity], result of:
          0.008173384 = score(doc=2575,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 2575, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=2575)
        0.060896628 = sum of:
          0.023203006 = weight(_text_:information in 2575) [ClassicSimilarity], result of:
            0.023203006 = score(doc=2575,freq=12.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.2850541 = fieldWeight in 2575, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046875 = fieldNorm(doc=2575)
          0.037693623 = weight(_text_:22 in 2575) [ClassicSimilarity], result of:
            0.037693623 = score(doc=2575,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.23214069 = fieldWeight in 2575, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2575)
      0.4 = coord(2/5)
    
    Abstract
    An important aspect of information retrieval systems is domain independence, where the subject of the information is not restricted to certain domains of knowledge. Such a system should be able to represent any topic, and although the text representation does not involve any semantic knowledge, lexical and syntactic analysis of the text allows the representation to remain domain independent. Reports research at Dublin City University, Ireland, which concentrates on the lexical and syntactic levels of natural language analysis and describes a domain-independent automatic information retrieval system which accesses a very large database of newspaper text from the Wall Street Journal. The system represents the text in the form of syntax trees, and these trees are used in the matching process. Reports early results from the study
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
    Type
    a
  4. Storms, G.; VanMechelen, I.; DeBoeck, P.: Structural-analysis of the intension and extension of semantic concepts (1994) 0.01
    0.013059636 = product of:
      0.03264909 = sum of:
        0.010661141 = weight(_text_:a in 2574) [ClassicSimilarity], result of:
          0.010661141 = score(doc=2574,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.19940455 = fieldWeight in 2574, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2574)
        0.021987949 = product of:
          0.043975897 = sum of:
            0.043975897 = weight(_text_:22 in 2574) [ClassicSimilarity], result of:
              0.043975897 = score(doc=2574,freq=2.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.2708308 = fieldWeight in 2574, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2574)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    A method (HICLAS, DeBoeck & Rosenberg, 1988) for studying the internal structure of semantic concepts is presented. The proposed method reveals the internal structure of the extension as well as the intension of a concept, together with a correspondence relation that shows the mutual dependence of both structures. Its use is illustrated with the analysis of simple concepts (e.g. sports) and conjunctive concepts (e.g. birds that are also pets). The underlying structure that is revealed can be interpreted as a differentiation of the simple concepts studied, and for conjunctive concepts the proposed method is able to extract non-inherited and emergent features (Hampton, 1988)
    Date
    22. 7.2000 19:17:40
    Type
    a
  5. Sharada, B.A.: Infolinguistics : an interdisciplinary study (1995) 0.01
    0.010522025 = product of:
      0.02630506 = sum of:
        0.012184162 = weight(_text_:a in 4225) [ClassicSimilarity], result of:
          0.012184162 = score(doc=4225,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.22789092 = fieldWeight in 4225, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=4225)
        0.014120899 = product of:
          0.028241798 = sum of:
            0.028241798 = weight(_text_:information in 4225) [ClassicSimilarity], result of:
              0.028241798 = score(doc=4225,freq=10.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.3469568 = fieldWeight in 4225, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4225)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Discusses the importance of the theories and principles of linguistics to the organization of information. Infolinguistics studies information or knowledge using linguistics as a representation mechanism. Theoretical studies of search languages and index languages require a theoretical framework provided by the interdisciplinary approach of infolinguistics which is a blend of cognitive psychology, linguistics and information science. Discusses how infolinguistics can contribute to information retrieval using both computer and manual application of grammatical theories
    Source
    Library science with a slant to documentation and information studies. 32(1995) no.3, S.113-121
    Type
    a
  6. Chomsky, N.: Aspects of the theory of syntax (1965) 0.01
    0.010051633 = product of:
      0.050258167 = sum of:
        0.050258167 = product of:
          0.100516334 = sum of:
            0.100516334 = weight(_text_:22 in 3829) [ClassicSimilarity], result of:
              0.100516334 = score(doc=3829,freq=2.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.61904186 = fieldWeight in 3829, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=3829)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    6. 1.1999 10:29:22
  7. Pandey, R.C.: Information retrieval systems : a linguistic approach (1997) 0.01
    0.009269844 = product of:
      0.02317461 = sum of:
        0.00770594 = weight(_text_:a in 133) [ClassicSimilarity], result of:
          0.00770594 = score(doc=133,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.14413087 = fieldWeight in 133, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=133)
        0.01546867 = product of:
          0.03093734 = sum of:
            0.03093734 = weight(_text_:information in 133) [ClassicSimilarity], result of:
              0.03093734 = score(doc=133,freq=12.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.38007212 = fieldWeight in 133, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0625 = fieldNorm(doc=133)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Describes the correspondence between linguistics and information retrieval. Notes relevant ideas from linguistics which are useful for information retrieval, particularly at the levels of semantics and syntax. Demonstrates that the conceptual model of Ranganathan, based on canons, postulates and principles, contains the principles expressed by other scholars in the field of information retrieval. Implements Ranganathan's conceptual models in information retrieval tools, using PRECIS as an example. Concludes that the Ranganathan models contain all the germinal ideas needed to meet the challenges of modern technology
    Source
    International information communication and education. 16(1997) no.1, S.14-30
    Type
    a
  8. Lobanov, A.S.: Languages and metalanguages (1993) 0.01
    0.008412599 = product of:
      0.021031497 = sum of:
        0.01155891 = weight(_text_:a in 7269) [ClassicSimilarity], result of:
          0.01155891 = score(doc=7269,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.2161963 = fieldWeight in 7269, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=7269)
        0.009472587 = product of:
          0.018945174 = sum of:
            0.018945174 = weight(_text_:information in 7269) [ClassicSimilarity], result of:
              0.018945174 = score(doc=7269,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.23274569 = fieldWeight in 7269, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7269)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Proposes a classification system for metalanguages, illustrating the types of linguistic and syntactical relationships involved
    Source
    International forum on information and documentation. 18(1993) no.2, S.3-8
    Type
    a
  9. Kuhlen, R.: Linguistische Grundlagen (1980) 0.01
    0.008234787 = product of:
      0.020586967 = sum of:
        0.009535614 = weight(_text_:a in 3829) [ClassicSimilarity], result of:
          0.009535614 = score(doc=3829,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.17835285 = fieldWeight in 3829, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=3829)
        0.011051352 = product of:
          0.022102704 = sum of:
            0.022102704 = weight(_text_:information in 3829) [ClassicSimilarity], result of:
              0.022102704 = score(doc=3829,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.27153665 = fieldWeight in 3829, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3829)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Source
    Grundlagen der praktischen Information und Dokumentation: eine Einführung. 2. Aufl
    Type
    a
  10. Hutchins, W.J.: Languages of indexing and classification : a linguistic study of structures and functions (1978) 0.01
    0.007058388 = product of:
      0.01764597 = sum of:
        0.008173384 = weight(_text_:a in 2968) [ClassicSimilarity], result of:
          0.008173384 = score(doc=2968,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 2968, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=2968)
        0.009472587 = product of:
          0.018945174 = sum of:
            0.018945174 = weight(_text_:information in 2968) [ClassicSimilarity], result of:
              0.018945174 = score(doc=2968,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.23274569 = fieldWeight in 2968, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2968)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Series
    Librarianship and information studies; vol.3
  11. Suominen, V.: Linguistic / semiotic conditions of information retrieval / documentation in the light of a Saussurean conception of language : 'organising knowledge' or 'communication concerning documents'? (1998) 0.01
    0.005898641 = product of:
      0.014746603 = sum of:
        0.0100103095 = weight(_text_:a in 81) [ClassicSimilarity], result of:
          0.0100103095 = score(doc=81,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.18723148 = fieldWeight in 81, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=81)
        0.0047362936 = product of:
          0.009472587 = sum of:
            0.009472587 = weight(_text_:information in 81) [ClassicSimilarity], result of:
              0.009472587 = score(doc=81,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.116372846 = fieldWeight in 81, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=81)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The argumentation consists of a representation of the basic structuralist concepts of language/semiotics as a two-level form, a form of expression and, especially here, a form of content, and of an application of these concepts to the representation of the contents of documents. On this basis the paper questions the notion of "organizing knowledge": whether, or in what sense, it is possible to organize knowledge. The paper brings out some reservations about viewing content representation as organizing knowledge in a strong sense and suggests that a notion of (meta)documentation, characterized as communication concerning documents, could be used instead
    Type
    a
  12. Johnson, F.C.; Paice, C.D.; Black, W.J.; Neal, A.P.: The application of linguistic processing to automatic abstract generation (1993) 0.01
    0.00588199 = product of:
      0.014704974 = sum of:
        0.0068111527 = weight(_text_:a in 2290) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=2290,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 2290, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=2290)
        0.007893822 = product of:
          0.015787644 = sum of:
            0.015787644 = weight(_text_:information in 2290) [ClassicSimilarity], result of:
              0.015787644 = score(doc=2290,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.19395474 = fieldWeight in 2290, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2290)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997. S.538-552.
    Type
    a
  13. Working with conceptual structures : contributions to ICCS 2000. 8th International Conference on Conceptual Structures: Logical, Linguistic, and Computational Issues. Darmstadt, August 14-18, 2000 (2000) 0.01
    0.0054945312 = product of:
      0.013736328 = sum of:
        0.009829085 = weight(_text_:a in 5089) [ClassicSimilarity], result of:
          0.009829085 = score(doc=5089,freq=34.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1838419 = fieldWeight in 5089, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5089)
        0.003907243 = product of:
          0.007814486 = sum of:
            0.007814486 = weight(_text_:information in 5089) [ClassicSimilarity], result of:
              0.007814486 = score(doc=5089,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.0960027 = fieldWeight in 5089, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=5089)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The 8th International Conference on Conceptual Structures - Logical, Linguistic, and Computational Issues (ICCS 2000) brings together a wide range of researchers and practitioners working with conceptual structures. During the last few years, the ICCS conference series has considerably widened its scope on different kinds of conceptual structures, stimulating research across domain boundaries. We hope that this stimulation is further enhanced by ICCS 2000 joining the long tradition of conferences in Darmstadt with extensive, lively discussions. This volume consists of contributions presented at ICCS 2000, complementing the volume "Conceptual Structures: Logical, Linguistic, and Computational Issues" (B. Ganter, G.W. Mineau (Eds.), LNAI 1867, Springer, Berlin-Heidelberg 2000). It contains submissions reviewed by the program committee, and position papers. We wish to express our appreciation to all the authors of submitted papers, to the general chair, the program chair, the editorial board, the program committee, and to the additional reviewers for making ICCS 2000 a valuable contribution in the knowledge processing research field. Special thanks go to the local organizers for making the conference an enjoyable and inspiring event. We are grateful to Darmstadt University of Technology, the Ernst Schröder Center for Conceptual Knowledge Processing, the Center for Interdisciplinary Studies in Technology, the Deutsche Forschungsgemeinschaft, Land Hessen, and NaviCon GmbH for their generous support
    Content
    Concepts & Language: Knowledge organization by procedures of natural language processing. A case study using the method GABEK (J. Zelger, J. Gadner) - Computer aided narrative analysis using conceptual graphs (H. Schärfe, P. Øhrstrøm) - Pragmatic representation of argumentative text: a challenge for the conceptual graph approach (H. Irandoust, B. Moulin) - Conceptual graphs as a knowledge representation core in a complex language learning environment (G. Angelova, A. Nenkova, S. Boycheva, T. Nikolov) - Conceptual Modeling and Ontologies: Relationships and actions in conceptual categories (Ch. Landauer, K.L. Bellman) - Concept approximations for formal concept analysis (J. Saquer, J.S. Deogun) - Faceted information representation (U. Priß) - Simple concept graphs with universal quantifiers (J. Tappe) - A framework for comparing methods for using or reusing multiple ontologies in an application (J. van Zyl, D. Corbett) - Designing task/method knowledge-based systems with conceptual graphs (M. Leclère, F. Trichet, Ch. Choquet) - A logical ontology (J. Farkas, J. Sarbo) - Algorithms and Tools: Fast concept analysis (Ch. Lindig) - A framework for conceptual graph unification (D. Corbett) - Visual CP representation of knowledge (H.D. Pfeiffer, R.T. Hartley) - Maximal isojoin for representing software textual specifications and detecting semantic anomalies (Th. Charnois) - Troika: using grids, lattices and graphs in knowledge acquisition (H.S. Delugach, B.E. Lampkin) - Open world theorem prover for conceptual graphs (J.E. Heaton, P. Kocura) - NetCare: a practical conceptual graphs software tool (S. Polovina, D. Strang) - CGWorld - a web based workbench for conceptual graphs management and applications (P. Dobrev, K. Toutanova) - Position papers: The edition project: Peirce's existential graphs (R. Müller) - Mining association rules using formal concept analysis (N. Pasquier) - Contextual logic summary (R. Wille) - Information channels and conceptual scaling (K.E. Wolff) - Spatial concepts - a rule exploration (S. Rudolph) - The TEXT-TO-ONTO learning environment (A. Mädche, St. Staab) - Controlling the semantics of metadata on audio-visual documents using ontologies (Th. Dechilly, B. Bachimont) - Building the ontological foundations of a terminology from natural language to conceptual graphs with Ribosome, a knowledge extraction system (Ch. Jacquelinet, A. Burgun) - CharGer: some lessons learned and new directions (H.S. Delugach) - Knowledge management using conceptual graphs (W.K. Pun)
  14. Chafe, W.L.: Meaning and the structure of language (1980) 0.00
    0.0043975897 = product of:
      0.021987949 = sum of:
        0.021987949 = product of:
          0.043975897 = sum of:
            0.043975897 = weight(_text_:22 in 220) [ClassicSimilarity], result of:
              0.043975897 = score(doc=220,freq=2.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.2708308 = fieldWeight in 220, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=220)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    22. 4.2007 12:21:29
  15. Miller, G.A.: Wörter : Streifzüge durch die Psycholinguistik (1993) 0.00
    0.0023527963 = product of:
      0.0058819903 = sum of:
        0.002724461 = weight(_text_:a in 1458) [ClassicSimilarity], result of:
          0.002724461 = score(doc=1458,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.050957955 = fieldWeight in 1458, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=1458)
        0.003157529 = product of:
          0.006315058 = sum of:
            0.006315058 = weight(_text_:information in 1458) [ClassicSimilarity], result of:
              0.006315058 = score(doc=1458,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.0775819 = fieldWeight in 1458, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1458)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Words are the linguistic expression of our thinking, created by ourselves, and yet something we rarely subject to closer examination. Yet precisely such an examination can tell us a good deal about what goes on in our brains. In recent decades language research has gained fresh momentum from the approaches of cognitive psychology, and George A. Miller, as one of the founders of modern psycholinguistics, has played no small part in that. In this book he recounts, often seasoned with his very particular humour, what linguistics has discovered in the realm of words. Miller shows the reader the different facets of words: each one is the interplay of an utterance (its phonetic pronunciation), a meaning (its semantics) and a role in the sentence (its syntax). Miller treats these three facets as a unity and vividly presents the theories and methods with which research gets to grips with words
    Theme
    Information
  16. Köper, B.: Vergleich von ausgewählten Thesaurus-Begriffsfeldern hinsichtlich ihrer linguistischen Relation (1990) 0.00
    0.0022102704 = product of:
      0.011051352 = sum of:
        0.011051352 = product of:
          0.022102704 = sum of:
            0.022102704 = weight(_text_:information in 39) [ClassicSimilarity], result of:
              0.022102704 = score(doc=39,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.27153665 = fieldWeight in 39, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.109375 = fieldNorm(doc=39)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Imprint
    Darmstadt : Fachhochschule, Fachbereich Information und Dokumentation
  17. Hutchins, W.J.: Linguistic processes in the indexing and retrieval of documents (1970) 0.00
    0.0021795689 = product of:
      0.010897844 = sum of:
        0.010897844 = weight(_text_:a in 1306) [ClassicSimilarity], result of:
          0.010897844 = score(doc=1306,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.20383182 = fieldWeight in 1306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=1306)
      0.2 = coord(1/5)
    
    Type
    a
  18. Gardin, J.C.: Document analysis and linguistic theory (1973) 0.00
    0.0021795689 = product of:
      0.010897844 = sum of:
        0.010897844 = weight(_text_:a in 2387) [ClassicSimilarity], result of:
          0.010897844 = score(doc=2387,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.20383182 = fieldWeight in 2387, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=2387)
      0.2 = coord(1/5)
    
    Type
    a
  19. Chomsky, N.: Logical structure in language (1957) 0.00
    0.0021795689 = product of:
      0.010897844 = sum of:
        0.010897844 = weight(_text_:a in 99) [ClassicSimilarity], result of:
          0.010897844 = score(doc=99,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.20383182 = fieldWeight in 99, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=99)
      0.2 = coord(1/5)
    
    Type
    a
  20. Amac, T.: Linguistic context analysis : a new approach to communication evaluation (1997) 0.00
    0.0021624742 = product of:
      0.010812371 = sum of:
        0.010812371 = weight(_text_:a in 2576) [ClassicSimilarity], result of:
          0.010812371 = score(doc=2576,freq=14.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.20223314 = fieldWeight in 2576, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=2576)
      0.2 = coord(1/5)
    
    Abstract
    Argues that the integration of computational psycholinguistics can improve corporate communication and thus become a new strategic tool. An electronic dictionary of basic, neutral and negative connotations was created for nouns, verbs and adjectives appearing in press releases and other communication media; it can be updated with client-specific words. The focus on negative messages has the objective of detecting who, why and how publics are criticized, to learn from the vocabulary of opinion leaders and to improve issues management proactively. Suggests a new form of analysis called 'computational linguistic context analysis' (CLCA), which analyzes nominal groups of negative words rather than monitoring content analysis in the traditional way. Concludes that CLCA can be used to analyze large quantities of press cuttings about a company and could, theoretically, be used to analyze the structure, language and style of a particular journalist to whom it is planned to send a press release or article
    Type
    a
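
The relevance breakdowns shown under each result follow Lucene's ClassicSimilarity explanation format: each matching query term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm with tf = sqrt(termFreq); the per-term contributions are summed and scaled by the coordination factor coord (matching query terms / total query terms). As a minimal sketch, assuming only the numbers printed in the breakdowns above (the helper name below is illustrative, not part of Lucene's API), the following Python recomputes the total shown for result 1 (doc 1609):

    from math import sqrt, isclose

    def classic_similarity_score(terms, coord):
        """Recombine a Lucene ClassicSimilarity 'explain' tree.

        terms: iterable of (freq, idf, query_norm, field_norm) tuples,
               one per matching query term.
        coord: coordination factor, matching terms / total query terms.
        """
        total = 0.0
        for freq, idf, query_norm, field_norm in terms:
            query_weight = idf * query_norm               # queryWeight = idf * queryNorm
            field_weight = sqrt(freq) * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
            total += query_weight * field_weight          # per-term weight
        return coord * total

    # Values copied from result 1 (doc 1609): terms _text_:a and _text_:91
    score = classic_similarity_score(
        [(2.0, 1.153047, 0.046368346, 0.078125),    # _text_:a
         (2.0, 5.5722036, 0.046368346, 0.078125)],  # _text_:91
        coord=2 / 5,
    )
    assert isclose(score, 0.06635133, rel_tol=1e-4)
    print(f"{score:.6f}")  # 0.066351

The same recombination applies to every other breakdown in this listing; only the per-term freq, idf and fieldNorm values and the coord fraction change from record to record.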

Languages

  • e 25
  • d 10

Types

  • a 23
  • m 11
  • s 3
  • x 1

Classifications