Search (3708 results, page 1 of 186)

  1. Murphy, M.L.: Semantic relations and the lexicon : antonymy, synonymy and other paradigms (2008) 0.13
    0.1306864 = product of:
      0.2613728 = sum of:
        0.2613728 = sum of:
          0.22639981 = weight(_text_:lexicon in 997) [ClassicSimilarity], result of:
            0.22639981 = score(doc=997,freq=4.0), product of:
              0.38679156 = queryWeight, product of:
                7.4921947 = idf(docFreq=66, maxDocs=44218)
                0.051625933 = queryNorm
              0.5853277 = fieldWeight in 997, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                7.4921947 = idf(docFreq=66, maxDocs=44218)
                0.0390625 = fieldNorm(doc=997)
          0.034973007 = weight(_text_:22 in 997) [ClassicSimilarity], result of:
            0.034973007 = score(doc=997,freq=2.0), product of:
              0.18078522 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051625933 = queryNorm
              0.19345059 = fieldWeight in 997, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=997)
      0.5 = coord(1/2)
    
    Abstract
    Semantic Relations and the Lexicon explores the many paradigmatic semantic relations between words, such as synonymy, antonymy and hyponymy, and their relevance to the mental organization of our vocabularies. Drawing on a century's research in linguistics, psychology, philosophy, anthropology and computer science, M. Lynne Murphy proposes a pragmatic approach to these relations. Whereas traditional approaches have claimed that paradigmatic relations are part of our lexical knowledge, Dr Murphy argues that they constitute metalinguistic knowledge, which can be derived through a single relational principle, and may also be stored as part of our extra-lexical, conceptual representations of a word. Part I shows how this approach can account for the properties of lexical relations in ways that traditional approaches cannot, and Part II examines particular relations in detail. This book will serve as an informative handbook for all linguists and cognitive scientists interested in the mental representation of vocabulary.
    Date
    22. 7.2013 10:53:30
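The ClassicSimilarity explanation above decomposes the score into tf, idf, fieldNorm, and queryNorm factors. As a sanity check, here is a minimal Python sketch recomputing document 997's score from the printed factors (the formula is standard Lucene TF-IDF; the constants are copied from the explanation above):

```python
import math

def field_weight(freq, idf, field_norm):
    # ClassicSimilarity: tf(freq) = sqrt(freq)
    return math.sqrt(freq) * idf * field_norm

query_norm = 0.051625933
idf_lexicon = 7.4921947   # idf(docFreq=66, maxDocs=44218)
idf_22 = 3.5018296        # idf(docFreq=3622, maxDocs=44218)

# weight = queryWeight * fieldWeight, with queryWeight = idf * queryNorm
w_lexicon = (idf_lexicon * query_norm) * field_weight(4.0, idf_lexicon, 0.0390625)
w_22 = (idf_22 * query_norm) * field_weight(2.0, idf_22, 0.0390625)

# only 1 of 2 top-level clauses matched, hence coord(1/2) = 0.5
score = (w_lexicon + w_22) * 0.5
print(score)  # ≈ 0.1306864, the listed score
```

The same arithmetic reproduces every explain tree in this listing; only the freq, fieldNorm, and coord values change per document.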
  2. Ahlswede, T.; Evens, M.: Generating a relational lexicon from a machine-readable dictionary (1988) 0.11
    0.112062186 = product of:
      0.22412437 = sum of:
        0.22412437 = product of:
          0.44824874 = sum of:
            0.44824874 = weight(_text_:lexicon in 870) [ClassicSimilarity], result of:
              0.44824874 = score(doc=870,freq=2.0), product of:
                0.38679156 = queryWeight, product of:
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.051625933 = queryNorm
                1.1588897 = fieldWeight in 870, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.109375 = fieldNorm(doc=870)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  3. De Luca, E.W.; Dahlberg, I.: Including knowledge domains from the ICC into the multilingual lexical linked data cloud (2014) 0.10
    0.10477407 = product of:
      0.20954815 = sum of:
        0.20954815 = sum of:
          0.16008884 = weight(_text_:lexicon in 1493) [ClassicSimilarity], result of:
            0.16008884 = score(doc=1493,freq=2.0), product of:
              0.38679156 = queryWeight, product of:
                7.4921947 = idf(docFreq=66, maxDocs=44218)
                0.051625933 = queryNorm
              0.41388917 = fieldWeight in 1493, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.4921947 = idf(docFreq=66, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1493)
          0.0494593 = weight(_text_:22 in 1493) [ClassicSimilarity], result of:
            0.0494593 = score(doc=1493,freq=4.0), product of:
              0.18078522 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051625933 = queryNorm
              0.27358043 = fieldWeight in 1493, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1493)
      0.5 = coord(1/2)
    
    Abstract
    Much of the information already available on the Web, or retrieved from local information systems and social networks, is structured in data silos that are not semantically related. Semantic technologies show that typed links which directly express relations benefit every application that can reuse the knowledge incorporated in the data. For this reason, data integration, whether through reengineering (e.g. Triplify) or querying (e.g. D2R), is an important task for making information available to everyone. Thus, in order to build a semantic map of the data, we need knowledge about the data items themselves and the relations between heterogeneous data items. In this paper, we present our work on providing Lexical Linked Data (LLD) through a meta-model that contains all the resources and makes it possible to retrieve and navigate them from different perspectives. We combine the existing work on knowledge domains (based on the Information Coding Classification) with the Multilingual Lexical Linked Data Cloud (based on the RDF/OWL EuroWordNet and the related integrated lexical resources: MultiWordNet, EuroWordNet, MEMODATA Lexicon, and the Hamburg Metaphor DB).
    Date
    22. 9.2014 19:01:18
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  4. Murphy, M.L.: Lexical meaning (2010) 0.10
    0.10454913 = product of:
      0.20909826 = sum of:
        0.20909826 = sum of:
          0.18111986 = weight(_text_:lexicon in 998) [ClassicSimilarity], result of:
            0.18111986 = score(doc=998,freq=4.0), product of:
              0.38679156 = queryWeight, product of:
                7.4921947 = idf(docFreq=66, maxDocs=44218)
                0.051625933 = queryNorm
              0.46826217 = fieldWeight in 998, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                7.4921947 = idf(docFreq=66, maxDocs=44218)
                0.03125 = fieldNorm(doc=998)
          0.027978405 = weight(_text_:22 in 998) [ClassicSimilarity], result of:
            0.027978405 = score(doc=998,freq=2.0), product of:
              0.18078522 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051625933 = queryNorm
              0.15476047 = fieldWeight in 998, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=998)
      0.5 = coord(1/2)
    
    Content
    Machine-generated contents note: Part I. Meaning and the Lexicon: 1. The lexicon - some preliminaries; 2. What do we mean by meaning?; 3. Components and prototypes; 4. Modern componential approaches - and some alternatives; Part II. Relations Among Words and Senses: 5. Meaning variation: polysemy, homonymy and vagueness; 6. Lexical and semantic relations; Part III. Word Classes and Semantic Types: 7. Ontological categories and word classes; 8. Nouns and countability; 9. Predication: verbs, events, and states; 10. Verbs and time; 11. Adjectives and properties.
    Date
    22. 7.2013 10:53:30
  5. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.10297948 = sum of:
      0.08199567 = product of:
        0.24598701 = sum of:
          0.24598701 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.24598701 = score(doc=562,freq=2.0), product of:
              0.43768525 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.051625933 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.020983804 = product of:
        0.041967608 = sum of:
          0.041967608 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.041967608 = score(doc=562,freq=2.0), product of:
              0.18078522 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051625933 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Vgl.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  6. Automating the lexicon : research and practice in a multilingual environment (1995) 0.10
    0.09605331 = product of:
      0.19210662 = sum of:
        0.19210662 = product of:
          0.38421324 = sum of:
            0.38421324 = weight(_text_:lexicon in 6431) [ClassicSimilarity], result of:
              0.38421324 = score(doc=6431,freq=2.0), product of:
                0.38679156 = queryWeight, product of:
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.051625933 = queryNorm
                0.99333405 = fieldWeight in 6431, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6431)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  7. Carterette, B.; Can, F.: Comparing inverted files and signature files for searching a large lexicon (2005) 0.10
    0.09605331 = product of:
      0.19210662 = sum of:
        0.19210662 = product of:
          0.38421324 = sum of:
            0.38421324 = weight(_text_:lexicon in 1029) [ClassicSimilarity], result of:
              0.38421324 = score(doc=1029,freq=8.0), product of:
                0.38679156 = queryWeight, product of:
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.051625933 = queryNorm
                0.99333405 = fieldWeight in 1029, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1029)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Signature files and inverted files are well-known index structures. In this paper we undertake a direct comparison of the two for searching for partially-specified queries in a large lexicon stored in main memory. Using n-grams to index lexicon terms, a bit-sliced signature file can be compressed to a smaller size than an inverted file if each n-gram sets only one bit in the term signature. With a signature width less than half the number of unique n-grams in the lexicon, the signature file method is about as fast as the inverted file method, and significantly smaller. Greater flexibility in memory usage and faster index generation time make signature files appropriate for searching large lexicons or other collections in an environment where memory is at a premium.
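The one-bit-per-n-gram scheme this abstract describes can be sketched as follows; the hash function, signature width, and bigram size here are illustrative assumptions, not details taken from the paper:

```python
import zlib

def ngrams(term, n=2):
    # overlapping character n-grams of a lexicon term
    return {term[i:i + n] for i in range(len(term) - n + 1)}

def signature(term, width=64, n=2):
    # each n-gram sets exactly one bit, keeping the signature sparse
    bits = 0
    for g in ngrams(term, n):
        bits |= 1 << (zlib.crc32(g.encode()) % width)
    return bits

def maybe_matches(term_sig, query_sig):
    # a term is a candidate when the query's bits are a subset of its bits;
    # false positives are possible and must be verified against the term itself
    return term_sig & query_sig == query_sig
```

Because the bigrams of a partially-specified query such as "lex" are a subset of the bigrams of "lexicon", its signature bits are necessarily a subset too, so the term is always retained as a candidate; bit-slicing the signatures column-wise then allows the subset test to scan only the query's set bits.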
  8. Snajder, J.; Dalbelo Basic, B.D.; Tadic, M.: Automatic acquisition of inflectional lexica for morphological normalisation (2008) 0.10
    0.09605331 = product of:
      0.19210662 = sum of:
        0.19210662 = product of:
          0.38421324 = sum of:
            0.38421324 = weight(_text_:lexicon in 2910) [ClassicSimilarity], result of:
              0.38421324 = score(doc=2910,freq=8.0), product of:
                0.38679156 = queryWeight, product of:
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.051625933 = queryNorm
                0.99333405 = fieldWeight in 2910, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2910)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Due to natural language morphology, words can take on various morphological forms. Morphological normalisation - often used in information retrieval and text mining systems - conflates morphological variants of a word to a single representative form. In this paper, we describe an approach to lexicon-based inflectional normalisation. This approach is in between stemming and lemmatisation, and is suitable for morphological normalisation of inflectionally complex languages. To eliminate the immense effort required to compile the lexicon by hand, we focus on the problem of acquiring automatically an inflectional morphological lexicon from raw corpora. We propose a convenient and highly expressive morphology representation formalism on which the acquisition procedure is based. Our approach is applied to the morphologically complex Croatian language, but it should be equally applicable to other languages of similar morphological complexity. Experimental results show that our approach can be used to acquire a lexicon whose linguistic quality allows for rather good normalisation performance.
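The normalisation step itself (as opposed to the acquisition procedure, which is the paper's contribution) reduces to a full-form lexicon lookup. A toy sketch with an invented English paradigm table, purely for illustration:

```python
# hypothetical stem -> inflectional-endings table; the paper acquires such
# patterns automatically from raw corpora, these entries are invented
PARADIGMS = {
    "index": ["", "es", "ed", "ing"],
    "query": ["", "ies", "ied", "ying"],
}

def build_fullform_lexicon(paradigms):
    # expand every stem with its endings into inflected-form -> stem pairs
    lex = {}
    for stem, endings in paradigms.items():
        for e in endings:
            base = stem[:-1] if e.startswith(("i", "y")) and stem.endswith("y") else stem
            lex[base + e] = stem
    return lex

def normalise(token, lexicon):
    # conflate a morphological variant to its representative form;
    # out-of-lexicon tokens pass through unchanged
    return lexicon.get(token.lower(), token)
```

Sitting between stemming and lemmatisation, this lookup needs no suffix-stripping heuristics at query time; all the linguistic knowledge lives in the acquired lexicon.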
  9. Sedelow, S.Y.: Formally modeling and extending whole-language-scale semantic space (1993) 0.09
    0.09055993 = product of:
      0.18111986 = sum of:
        0.18111986 = product of:
          0.36223972 = sum of:
            0.36223972 = weight(_text_:lexicon in 7096) [ClassicSimilarity], result of:
              0.36223972 = score(doc=7096,freq=4.0), product of:
                0.38679156 = queryWeight, product of:
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.051625933 = queryNorm
                0.93652433 = fieldWeight in 7096, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7096)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    For the analysis of continuous discourse in a wide range of corpora, it is essential both to model and to expand whole-language lexical resources (e.g., Roget's International Thesaurus). Rapidly extensible lexicons are of interest as special-domain extensions to a whole-language lexicon. The presentation argues for the validity of this approach, with specific reference to a viable, conceptual, whole-language, foundational lexicon, Roget's International Thesaurus, 1962
  10. Perera, P.; Witte, R.: ¬A self-learning context-aware lemmatizer for German (2005) 0.09
    0.09055993 = product of:
      0.18111986 = sum of:
        0.18111986 = product of:
          0.36223972 = sum of:
            0.36223972 = weight(_text_:lexicon in 4638) [ClassicSimilarity], result of:
              0.36223972 = score(doc=4638,freq=4.0), product of:
                0.38679156 = queryWeight, product of:
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.051625933 = queryNorm
                0.93652433 = fieldWeight in 4638, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4638)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Accurate lemmatization of German nouns mandates the use of a lexicon. Comprehensive lexicons, however, are expensive to build and maintain. We present a self-learning lemmatizer capable of automatically creating a full-form lexicon by processing German documents.
  11. Fachsystematik Bremen nebst Schlüssel 1970 ff. (1970 ff) 0.09
    0.085816234 = sum of:
      0.06832973 = product of:
        0.20498918 = sum of:
          0.20498918 = weight(_text_:3a in 3577) [ClassicSimilarity], result of:
            0.20498918 = score(doc=3577,freq=2.0), product of:
              0.43768525 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.051625933 = queryNorm
              0.46834838 = fieldWeight in 3577, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3577)
        0.33333334 = coord(1/3)
      0.017486503 = product of:
        0.034973007 = sum of:
          0.034973007 = weight(_text_:22 in 3577) [ClassicSimilarity], result of:
            0.034973007 = score(doc=3577,freq=2.0), product of:
              0.18078522 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051625933 = queryNorm
              0.19345059 = fieldWeight in 3577, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3577)
        0.5 = coord(1/2)
    
    Content
    1. Agrarwissenschaften 1981. - 3. Allgemeine Geographie 2.1972. - 3a. Allgemeine Naturwissenschaften 1.1973. - 4. Allgemeine Sprachwissenschaft, Allgemeine Literaturwissenschaft 2.1971. - 6. Allgemeines. 5.1983. - 7. Anglistik 3.1976. - 8. Astronomie, Geodäsie 4.1977. - 12. bio Biologie, bcp Biochemie-Biophysik, bot Botanik, zoo Zoologie 1981. - 13. Bremensien 3.1983. - 13a. Buch- und Bibliothekswesen 3.1975. - 14. Chemie 4.1977. - 14a. Elektrotechnik 1974. - 15. Ethnologie 2.1976. - 16,1. Geowissenschaften. Sachteil 3.1977. - 16,2. Geowissenschaften. Regionaler Teil 3.1977. - 17. Germanistik 6.1984. - 17a,1. Geschichte. Teilsystematik hil. - 17a,2. Geschichte. Teilsystematik his Neuere Geschichte. - 17a,3. Geschichte. Teilsystematik hit Neueste Geschichte. - 18. Humanbiologie 2.1983. - 19. Ingenieurwissenschaften 1974. - 20. siehe 14a. - 21. klassische Philologie 3.1977. - 22. Klinische Medizin 1975. - 23. Kunstgeschichte 2.1971. - 24. Kybernetik. 2.1975. - 25. Mathematik 3.1974. - 26. Medizin 1976. - 26a. Militärwissenschaft 1985. - 27. Musikwissenschaft 1978. - 27a. Noten 2.1974. - 28. Ozeanographie 3.1977. - 29. Pädagogik 8.1985. - 30. Philosophie 3.1974. - 31. Physik 3.1974. - 33. Politik, Politische Wissenschaft, Sozialwissenschaft. Soziologie. Länderschlüssel. Register 1981. - 34. Psychologie 2.1972. - 35. Publizistik und Kommunikationswissenschaft 1985. - 36. Rechtswissenschaften 1986. - 37. Regionale Geographie 3.1975. - 37a. Religionswissenschaft 1970. - 38. Romanistik 3.1976. - 39. Skandinavistik 4.1985. - 40. Slavistik 1977. - 40a. Sonstige Sprachen und Literaturen 1973. - 43. Sport 4.1983. - 44. Theaterwissenschaft 1985. - 45. Theologie 2.1976. - 45a. Ur- und Frühgeschichte, Archäologie 1970. - 47. Volkskunde 1976. - 47a. Wirtschaftswissenschaften 1971 // Schlüssel: 1. Länderschlüssel 1971. - 2. Formenschlüssel (Kurzform) 1974. - 3. Personenschlüssel Literatur 5. Fassung 1968
  12. Dahlberg, I: ¬A systematic new lexicon of all knowledge fields based on the Information Coding Classification (2012) 0.08
    0.08318461 = product of:
      0.16636921 = sum of:
        0.16636921 = product of:
          0.33273843 = sum of:
            0.33273843 = weight(_text_:lexicon in 81) [ClassicSimilarity], result of:
              0.33273843 = score(doc=81,freq=6.0), product of:
                0.38679156 = queryWeight, product of:
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.051625933 = queryNorm
                0.86025256 = fieldWeight in 81, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.046875 = fieldNorm(doc=81)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A new lexicon of all knowledge fields in the German language, with the terms of the fields in English, is under preparation. The article is meant to provide an idea of its genesis and its structure. It will, of course, also contain an alphabetical arrangement of entries. The structure is provided by the Information Coding Classification (ICC), which is a theory-based, faceted universal classification system of knowledge fields. Section (1) outlines its early history (1970-77). Section (2) discusses its twelve principles regarding concepts, conceptual relationships, and notation; its 9 main object area classes arranged on integrative levels; and its systematic digital schedule with its systematizer, offering 9 subdividing aspects. It shows possible links with other systems, as well as the system's assets for interdisciplinarity and transdisciplinarity. Providing concrete examples, section (3) describes the contents of the nine levels, section (4) delineates some issues of subject group/domain construction, and section (5) clarifies the lexicon entries.
  13. Thelwall, M.; Buckley, K.: Topic-based sentiment analysis for the social web : the role of mood and issue-related words (2013) 0.08
    0.08318461 = product of:
      0.16636921 = sum of:
        0.16636921 = product of:
          0.33273843 = sum of:
            0.33273843 = weight(_text_:lexicon in 1004) [ClassicSimilarity], result of:
              0.33273843 = score(doc=1004,freq=6.0), product of:
                0.38679156 = queryWeight, product of:
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.051625933 = queryNorm
                0.86025256 = fieldWeight in 1004, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1004)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    General sentiment analysis for the social web has become increasingly useful for shedding light on the role of emotion in online communication and offline events in both academic research and data journalism. Nevertheless, existing general-purpose social web sentiment analysis algorithms may not be optimal for texts focussed around specific topics. This article introduces 2 new methods, mood setting and lexicon extension, to improve the accuracy of topic-specific lexical sentiment strength detection for the social web. Mood setting allows the topic mood to determine the default polarity for ostensibly neutral expressive text. Topic-specific lexicon extension involves adding topic-specific words to the default general sentiment lexicon. Experiments with 8 data sets show that both methods can improve sentiment analysis performance in corpora and are recommended when the topic focus is tightest.
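The two methods the abstract names can be illustrated with a toy lexical scorer; the word lists, strengths, and mood value below are invented examples, not the authors' actual lexicon:

```python
GENERAL_LEXICON = {"good": 2, "great": 3, "bad": -2, "awful": -3}
TOPIC_EXTENSION = {"offside": -1, "goal": 2}  # invented topic-specific terms

def sentiment(tokens, extension=None, mood_default=0):
    # lexicon extension: topic-specific words extend (and may override)
    # the default general sentiment lexicon
    lexicon = {**GENERAL_LEXICON, **(extension or {})}
    hits = [lexicon[t] for t in tokens if t in lexicon]
    # mood setting: ostensibly neutral expressive text falls back to the
    # topic mood instead of a zero score
    return sum(hits) if hits else mood_default
```

For example, sentiment(["goal", "great"], TOPIC_EXTENSION) scores 5 only because of the extension, and an out-of-lexicon utterance inherits the topic mood via mood_default rather than being treated as neutral.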
  14. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.08
    0.08199567 = product of:
      0.16399135 = sum of:
        0.16399135 = product of:
          0.49197403 = sum of:
            0.49197403 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
              0.49197403 = score(doc=973,freq=2.0), product of:
                0.43768525 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.051625933 = queryNorm
                1.1240361 = fieldWeight in 973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.09375 = fieldNorm(doc=973)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Vgl.: http://creativechoice.org/doc/HansJonas.pdf.
  15. LexiCon : neues großes Lexikon in Farbe (1996) 0.08
    0.08004442 = product of:
      0.16008884 = sum of:
        0.16008884 = product of:
          0.32017767 = sum of:
            0.32017767 = weight(_text_:lexicon in 5384) [ClassicSimilarity], result of:
              0.32017767 = score(doc=5384,freq=2.0), product of:
                0.38679156 = queryWeight, product of:
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.051625933 = queryNorm
                0.82777834 = fieldWeight in 5384, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5384)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  16. Digitale Bibliotheken : Ein Multimedia-Feuerwerk versprechen CD-ROM-Lexika. TOMORROW hat sie begutachtet (1999) 0.08
    0.08004442 = product of:
      0.16008884 = sum of:
        0.16008884 = product of:
          0.32017767 = sum of:
            0.32017767 = weight(_text_:lexicon in 3414) [ClassicSimilarity], result of:
              0.32017767 = score(doc=3414,freq=2.0), product of:
                0.38679156 = queryWeight, product of:
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.051625933 = queryNorm
                0.82777834 = fieldWeight in 3414, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3414)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Object
    LexiCon
  17. Bergler, S.: Generative lexicon principles for machine translation : a case for meta-lexical structure (1994/95) 0.08
    0.079239935 = product of:
      0.15847987 = sum of:
        0.15847987 = product of:
          0.31695974 = sum of:
            0.31695974 = weight(_text_:lexicon in 4072) [ClassicSimilarity], result of:
              0.31695974 = score(doc=4072,freq=4.0), product of:
                0.38679156 = queryWeight, product of:
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.051625933 = queryNorm
                0.8194588 = fieldWeight in 4072, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4072)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Addresses 2 types of mismatches in the translation of reported speech between German and English. The 1st mismatch is between the repeated use of the reported speech construction in English and the use of the subjunctive in German to indicate continued attribution. The 2nd mismatch concerns the difference in usage of metonymic extensions in the subject position of reported speech. Presents examples showing the different styles of reporting the utterances. One key feature of the proposed lexicon is a meta-lexical organization of basic word entries, which is shown to facilitate the translation process. Contrasts notions of lexical structure with different recent proposals in machine translation
  18. Müller, T.; Neth, H.: Wissenswust : Multimedia-Enzyklopädien auf CD-ROM (1996) 0.08
    0.079239935 = product of:
      0.15847987 = sum of:
        0.15847987 = product of:
          0.31695974 = sum of:
            0.31695974 = weight(_text_:lexicon in 4878) [ClassicSimilarity], result of:
              0.31695974 = score(doc=4878,freq=4.0), product of:
                0.38679156 = queryWeight, product of:
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.051625933 = queryNorm
                0.8194588 = fieldWeight in 4878, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.4921947 = idf(docFreq=66, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4878)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Reviewed are: Kompakt Brockhaus Multimedial; Bertelsmann Universallexikon 1996; Bertelsmann Discovery 1995/96; Knaurs Lexikon von A bis Z 4.3b; Microsoft Home LexiROM; LexiCon; Data Becker Lexikon 1.0e; Compton's Interactive Encyclopedia 1996; Grolier Multimedia Encyclopedia 1996; Hutchinson Multimedia Encyclopedia 1995; Microsoft Encarta 96; InfoPedia 2.0; Encyclopaedia Britannica CD 2.02
    Object
    LexiCon
  19. #220 0.07
    0.06924302 = product of:
      0.13848604 = sum of:
        0.13848604 = product of:
          0.2769721 = sum of:
            0.2769721 = weight(_text_:22 in 219) [ClassicSimilarity], result of:
              0.2769721 = score(doc=219,freq=4.0), product of:
                0.18078522 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051625933 = queryNorm
                1.5320505 = fieldWeight in 219, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.21875 = fieldNorm(doc=219)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 5.1998 20:02:22
  20. #1387 0.07
    0.06924302 = product of:
      0.13848604 = sum of:
        0.13848604 = product of:
          0.2769721 = sum of:
            0.2769721 = weight(_text_:22 in 1386) [ClassicSimilarity], result of:
              0.2769721 = score(doc=1386,freq=4.0), product of:
                0.18078522 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051625933 = queryNorm
                1.5320505 = fieldWeight in 1386, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.21875 = fieldNorm(doc=1386)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 5.1998 20:02:22

Types

  • a 3100
  • m 347
  • el 165
  • s 143
  • b 39
  • x 36
  • i 24
  • r 17
  • ? 8
  • p 4
  • d 3
  • n 3
  • u 2
  • z 2
  • au 1
  • h 1
