Search (4741 results, page 1 of 238)

  1. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatically generated word hierarchies (1996) 0.18
    0.18483627 = product of:
      0.36967254 = sum of:
        0.36967254 = sum of:
          0.25564238 = weight(_text_:word in 3164) [ClassicSimilarity], result of:
            0.25564238 = score(doc=3164,freq=2.0), product of:
              0.31520873 = queryWeight, product of:
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.06011691 = queryNorm
              0.81102574 = fieldWeight in 3164, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.109375 = fieldNorm(doc=3164)
          0.11403016 = weight(_text_:22 in 3164) [ClassicSimilarity], result of:
            0.11403016 = score(doc=3164,freq=2.0), product of:
              0.21051918 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.06011691 = queryNorm
              0.5416616 = fieldWeight in 3164, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=3164)
      0.5 = coord(1/2)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
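
    The score breakdown above is a Lucene ClassicSimilarity explain tree; the same pattern repeats for every hit below. As a minimal sketch (constants are copied from hit 1's tree; queryNorm depends on the full query and is taken as given, and the helper names are illustrative, not Lucene API), the arithmetic reproduces as:

    ```python
    import math

    def idf(doc_freq, max_docs):
        # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_score(freq, idf_val, query_norm, field_norm):
        query_weight = idf_val * query_norm                    # queryWeight
        field_weight = math.sqrt(freq) * idf_val * field_norm  # tf * idf * fieldNorm
        return query_weight * field_weight

    QUERY_NORM = 0.06011691  # from the tree; derived from the full query, used as given

    w_word = term_score(2.0, idf(634, 44218), QUERY_NORM, 0.109375)   # _text_:word
    w_22 = term_score(2.0, idf(3622, 44218), QUERY_NORM, 0.109375)    # _text_:22

    # coord(1/2): one of two top-level clauses matched
    score = 0.5 * (w_word + w_22)  # roughly 0.1848, matching the 0.18483627 above
    ```

    The idf values check out against the tree as well: idf(docFreq=634, maxDocs=44218) comes to about 5.24326 and idf(docFreq=3622, maxDocs=44218) to about 3.50183.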
  2. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.18
    0.18483627 = product of:
      0.36967254 = sum of:
        0.36967254 = sum of:
          0.25564238 = weight(_text_:word in 3117) [ClassicSimilarity], result of:
            0.25564238 = score(doc=3117,freq=2.0), product of:
              0.31520873 = queryWeight, product of:
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.06011691 = queryNorm
              0.81102574 = fieldWeight in 3117, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.109375 = fieldNorm(doc=3117)
          0.11403016 = weight(_text_:22 in 3117) [ClassicSimilarity], result of:
            0.11403016 = score(doc=3117,freq=2.0), product of:
              0.21051918 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.06011691 = queryNorm
              0.5416616 = fieldWeight in 3117, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=3117)
      0.5 = coord(1/2)
    
    Date
    28. 2.1999 10:48:22
  3. Weinberg, B.H.: Index structures in early Hebrew Biblical word lists : preludes to the first Latin concordances (2004) 0.18
    0.18483627 = product of:
      0.36967254 = sum of:
        0.36967254 = sum of:
          0.25564238 = weight(_text_:word in 4180) [ClassicSimilarity], result of:
            0.25564238 = score(doc=4180,freq=2.0), product of:
              0.31520873 = queryWeight, product of:
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.06011691 = queryNorm
              0.81102574 = fieldWeight in 4180, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.109375 = fieldNorm(doc=4180)
          0.11403016 = weight(_text_:22 in 4180) [ClassicSimilarity], result of:
            0.11403016 = score(doc=4180,freq=2.0), product of:
              0.21051918 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.06011691 = queryNorm
              0.5416616 = fieldWeight in 4180, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=4180)
      0.5 = coord(1/2)
    
    Date
    17.10.2005 13:54:22
  4. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.18
    0.17937772 = product of:
      0.35875544 = sum of:
        0.35875544 = sum of:
          0.30988538 = weight(_text_:word in 563) [ClassicSimilarity], result of:
            0.30988538 = score(doc=563,freq=16.0), product of:
              0.31520873 = queryWeight, product of:
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.06011691 = queryNorm
              0.9831117 = fieldWeight in 563, product of:
                4.0 = tf(freq=16.0), with freq of:
                  16.0 = termFreq=16.0
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.046875 = fieldNorm(doc=563)
          0.048870068 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
            0.048870068 = score(doc=563,freq=2.0), product of:
              0.21051918 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.06011691 = queryNorm
              0.23214069 = fieldWeight in 563, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=563)
      0.5 = coord(1/2)
    
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with the LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language- and domain-independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human-written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the alignment process from a training set and focuses on selecting high-quality multi-word terms from human-written summaries to generate suitable results for web-page summarization.
    Date
    10. 1.2013 19:22:47
  5. International Conference on Terminology Science and Terminology Planning : In commemoration of E. Drezen (1892-1992), Riga, 17-19 Aug. 1992 (1994) 0.13
    0.13416535 = sum of:
      0.0437821 = product of:
        0.1313463 = sum of:
          0.1313463 = weight(_text_:objects in 4570) [ClassicSimilarity], result of:
            0.1313463 = score(doc=4570,freq=2.0), product of:
              0.31952566 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.06011691 = queryNorm
              0.41106653 = fieldWeight in 4570, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4570)
        0.33333334 = coord(1/3)
      0.09038324 = product of:
        0.18076648 = sum of:
          0.18076648 = weight(_text_:word in 4570) [ClassicSimilarity], result of:
            0.18076648 = score(doc=4570,freq=4.0), product of:
              0.31520873 = queryWeight, product of:
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.06011691 = queryNorm
              0.5734818 = fieldWeight in 4570, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4570)
        0.5 = coord(1/2)
    
    Content
    Contains the sections: (1) Terminology and philosophy of science; (2) Terminological theories and their development; (3) Terminology planning and internationalization of terminology; (4) Comparative studies in terminology; (5) Terminography; (6) Theoretical issues of terminology science. Contains, among others, the contributions: OESER, E.: Terminology and philosophy of science; SLODZIAN, M.: Terminology theory and philosophy of science; CEVERE, R., I. GREITANE u. A. SPEKTORS: The problems of word classification in the formation of thematic word stock in automated terminological dictionaries; FELBER, H.: A relational model: objects, concepts, terms; PICHT, H.: On object and concept representation with focus on non-verbal forms of representation; TOFT, B.: Conceptual relations in terminology and knowledge engineering
  6. Hajra, A. et al.: Enriching scientific publications from LOD repositories through word embeddings approach (2016) 0.13
    0.13202591 = product of:
      0.26405182 = sum of:
        0.26405182 = sum of:
          0.1826017 = weight(_text_:word in 3281) [ClassicSimilarity], result of:
            0.1826017 = score(doc=3281,freq=2.0), product of:
              0.31520873 = queryWeight, product of:
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.06011691 = queryNorm
              0.5793041 = fieldWeight in 3281, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.078125 = fieldNorm(doc=3281)
          0.08145012 = weight(_text_:22 in 3281) [ClassicSimilarity], result of:
            0.08145012 = score(doc=3281,freq=2.0), product of:
              0.21051918 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.06011691 = queryNorm
              0.38690117 = fieldWeight in 3281, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=3281)
      0.5 = coord(1/2)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  7. Maaten, L. van den; Hinton, G.: Visualizing non-metric similarities in multiple maps (2012) 0.13
    0.13054328 = sum of:
      0.053071927 = product of:
        0.15921578 = sum of:
          0.15921578 = weight(_text_:objects in 3884) [ClassicSimilarity], result of:
            0.15921578 = score(doc=3884,freq=4.0), product of:
              0.31952566 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.06011691 = queryNorm
              0.49828792 = fieldWeight in 3884, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.046875 = fieldNorm(doc=3884)
        0.33333334 = coord(1/3)
      0.077471346 = product of:
        0.15494269 = sum of:
          0.15494269 = weight(_text_:word in 3884) [ClassicSimilarity], result of:
            0.15494269 = score(doc=3884,freq=4.0), product of:
              0.31520873 = queryWeight, product of:
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.06011691 = queryNorm
              0.49155584 = fieldWeight in 3884, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.046875 = fieldNorm(doc=3884)
        0.5 = coord(1/2)
    
    Abstract
    Techniques for multidimensional scaling visualize objects as points in a low-dimensional metric map. As a result, the visualizations are subject to the fundamental limitations of metric spaces. These limitations prevent multidimensional scaling from faithfully representing non-metric similarity data such as word associations or event co-occurrences. In particular, multidimensional scaling cannot faithfully represent intransitive pairwise similarities in a visualization, and it cannot faithfully visualize "central" objects. In this paper, we present an extension of a recently proposed multidimensional scaling technique called t-SNE. The extension aims to address the problems of traditional multidimensional scaling techniques when these techniques are used to visualize non-metric similarities. The new technique, called multiple maps t-SNE, alleviates these problems by constructing a collection of maps that reveal complementary structure in the similarity data. We apply multiple maps t-SNE to a large data set of word association data and to a data set of NIPS co-authorships, demonstrating its ability to successfully visualize non-metric similarities.
  8. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.13
    0.1269095 = sum of:
      0.06365439 = product of:
        0.19096316 = sum of:
          0.19096316 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
            0.19096316 = score(doc=5820,freq=2.0), product of:
              0.5096718 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.06011691 = queryNorm
              0.3746787 = fieldWeight in 5820, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.03125 = fieldNorm(doc=5820)
        0.33333334 = coord(1/3)
      0.063255094 = product of:
        0.12651019 = sum of:
          0.12651019 = weight(_text_:word in 5820) [ClassicSimilarity], result of:
            0.12651019 = score(doc=5820,freq=6.0), product of:
              0.31520873 = queryWeight, product of:
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.06011691 = queryNorm
              0.4013537 = fieldWeight in 5820, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.03125 = fieldNorm(doc=5820)
        0.5 = coord(1/2)
    
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with expansion terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations together, with their uncertainties considered. Finally, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts, and to rank documents using their structure representations. This dissertation overcomes the limitation of word-based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards the new generation of intelligent, semantic, and structured information retrieval.
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf
  9. Cai, X.; Li, W.: Enhancing sentence-level clustering with integrated and interactive frameworks for theme-based summarization (2011) 0.12
    0.122573785 = sum of:
      0.031272933 = product of:
        0.0938188 = sum of:
          0.0938188 = weight(_text_:objects in 4770) [ClassicSimilarity], result of:
            0.0938188 = score(doc=4770,freq=2.0), product of:
              0.31952566 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.06011691 = queryNorm
              0.29361898 = fieldWeight in 4770, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4770)
        0.33333334 = coord(1/3)
      0.09130085 = product of:
        0.1826017 = sum of:
          0.1826017 = weight(_text_:word in 4770) [ClassicSimilarity], result of:
            0.1826017 = score(doc=4770,freq=8.0), product of:
              0.31520873 = queryWeight, product of:
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.06011691 = queryNorm
              0.5793041 = fieldWeight in 4770, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4770)
        0.5 = coord(1/2)
    
    Abstract
    Sentence clustering plays a pivotal role in theme-based summarization, which discovers topic themes defined as clusters of highly related sentences in order to avoid redundancy and cover more diverse information. Because sentences are short and carry limited content, the bag-of-words cosine similarity traditionally used for document clustering is no longer suitable; special treatment for measuring sentence similarity is necessary. In this article, we study the sentence-level clustering problem. After exploiting concept- and context-enriched sentence vector representations, we develop two co-clustering frameworks to enhance sentence-level clustering for theme-based summarization (integrated clustering and interactive clustering), both allowing word and document to play an explicit role in sentence clustering as independent text objects rather than using word or concept as features of a sentence in a document set. In each framework, we experiment with two-level co-clustering (i.e., sentence-word co-clustering or sentence-document co-clustering) and three-level co-clustering (i.e., document-sentence-word co-clustering). Compared against concept- and context-oriented sentence-representation reformation, co-clustering shows a clear advantage in both intrinsic clustering-quality evaluation and extrinsic summarization evaluation conducted on the Document Understanding Conferences (DUC) datasets.
  10. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.12
    0.119916625 = sum of:
      0.09548159 = product of:
        0.28644475 = sum of:
          0.28644475 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.28644475 = score(doc=562,freq=2.0), product of:
              0.5096718 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.06011691 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.024435034 = product of:
        0.048870068 = sum of:
          0.048870068 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.048870068 = score(doc=562,freq=2.0), product of:
              0.21051918 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.06011691 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf
    Date
    8. 1.2013 10:22:32
  11. Murphy, M.L.: Lexical meaning (2010) 0.12
    0.11958516 = product of:
      0.23917031 = sum of:
        0.23917031 = sum of:
          0.20659027 = weight(_text_:word in 998) [ClassicSimilarity], result of:
            0.20659027 = score(doc=998,freq=16.0), product of:
              0.31520873 = queryWeight, product of:
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.06011691 = queryNorm
              0.6554078 = fieldWeight in 998, product of:
                4.0 = tf(freq=16.0), with freq of:
                  16.0 = termFreq=16.0
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.03125 = fieldNorm(doc=998)
          0.032580048 = weight(_text_:22 in 998) [ClassicSimilarity], result of:
            0.032580048 = score(doc=998,freq=2.0), product of:
              0.21051918 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.06011691 = queryNorm
              0.15476047 = fieldWeight in 998, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=998)
      0.5 = coord(1/2)
    
    Abstract
    The ideal introduction for students of semantics, Lexical Meaning fills the gap left by more general semantics textbooks, providing the teacher and the student with insights into word meaning beyond the traditional overviews of lexical relations. The book explores the relationship between word meanings and syntax and semantics more generally. It provides a balanced overview of the main theoretical approaches, along with a lucid explanation of their relative strengths and weaknesses. After covering the main topics in lexical meaning, such as polysemy and sense relations, the textbook surveys the types of meanings represented by different word classes. It explains abstract concepts in clear language, using a wide range of examples, and includes linguistic puzzles in each chapter to encourage the student to practise using the concepts. 'Adopt-a-Word' exercises give students the chance to research a particular word, building a portfolio of specialist work on a single word.
    Content
    Contents (machine-generated contents note): Part I. Meaning and the Lexicon: 1. The lexicon - some preliminaries; 2. What do we mean by meaning?; 3. Components and prototypes; 4. Modern componential approaches - and some alternatives; Part II. Relations Among Words and Senses: 5. Meaning variation: polysemy, homonymy and vagueness; 6. Lexical and semantic relations; Part III. Word Classes and Semantic Types: 7. Ontological categories and word classes; 8. Nouns and countability; 9. Predication: verbs, events, and states; 10. Verbs and time; 11. Adjectives and properties.
    Date
    22. 7.2013 10:53:30
  12. Localist connectionist approaches to human cognition (1998) 0.12
    0.11931767 = product of:
      0.23863535 = sum of:
        0.23863535 = sum of:
          0.18976527 = weight(_text_:word in 3774) [ClassicSimilarity], result of:
            0.18976527 = score(doc=3774,freq=6.0), product of:
              0.31520873 = queryWeight, product of:
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.06011691 = queryNorm
              0.6020305 = fieldWeight in 3774, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.046875 = fieldNorm(doc=3774)
          0.048870068 = weight(_text_:22 in 3774) [ClassicSimilarity], result of:
            0.048870068 = score(doc=3774,freq=2.0), product of:
              0.21051918 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.06011691 = queryNorm
              0.23214069 = fieldWeight in 3774, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3774)
      0.5 = coord(1/2)
    
    Content
    Contains the contributions: GRAINGER, J. u. A.M. JACOBS: On localist connectionism and psychological science; HOUGHTON, G. u. S.P. TIPPER: A model of selective attention as a mechanism of cognitive control; BURTON, A.M.: A model of human face recognition; FRAUENFELDER, U.H. u. G. PEETERS: Simulating the time course of spoken word recognition: an analysis of lexical competition in TRACE; JACOBS, A.M. u.a.: MROM-p: an interactive activation, multiple read-out model of orthographic and phonological processes in visual word recognition; DIJKSTRA, T. u. W.J.B. van HEUVEN: The BIA model and bilingual word recognition; PAGE, M. u. D. NORRIS: Modeling immediate serial recall with a localist implementation of the primacy model; SCHADE, U. u. H.-J. EIKMEYER: Modeling the production of object specifications; GOLDSTONE, R.L.: Hanging together: a connectionist model of similarity; MYUNG, J. u. A.A. PITT: Issues in selecting mathematical models of cognition
    Date
    1. 6.1999 19:50:22
  13. Rahmstorf, G.: Concept structures for large vocabularies (1998) 0.12
    0.11931767 = product of:
      0.23863535 = sum of:
        0.23863535 = sum of:
          0.18976527 = weight(_text_:word in 75) [ClassicSimilarity], result of:
            0.18976527 = score(doc=75,freq=6.0), product of:
              0.31520873 = queryWeight, product of:
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.06011691 = queryNorm
              0.6020305 = fieldWeight in 75, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.046875 = fieldNorm(doc=75)
          0.048870068 = weight(_text_:22 in 75) [ClassicSimilarity], result of:
            0.048870068 = score(doc=75,freq=2.0), product of:
              0.21051918 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.06011691 = queryNorm
              0.23214069 = fieldWeight in 75, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=75)
      0.5 = coord(1/2)
    
    Abstract
    A technology is described which supports the acquisition, visualisation and manipulation of large vocabularies with associated structures. It is used for dictionary production, terminology data bases, thesauri, library classification systems etc. Essential features of the technology are a lexicographic user interface, variable word description, unlimited list of word readings, a concept language, automatic transformations of formulas into graphic structures, structure manipulation operations and retransformation into formulas. The concept language includes notations for undefined concepts. The structure of defined concepts can be constructed interactively. The technology supports the generation of large vocabularies with structures representing word senses. Concept structures and ordering systems for indexing and retrieval can be constructed separately and connected by associating relations.
    Date
    30.12.2001 19:01:22
  14. Arsenault, C.; Ménard, E.: Searching titles with initial articles in library catalogs : a case study and search behavior analysis (2007) 0.12
    0.11931767 = product of:
      0.23863535 = sum of:
        0.23863535 = sum of:
          0.18976527 = weight(_text_:word in 2264) [ClassicSimilarity], result of:
            0.18976527 = score(doc=2264,freq=6.0), product of:
              0.31520873 = queryWeight, product of:
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.06011691 = queryNorm
              0.6020305 = fieldWeight in 2264, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.046875 = fieldNorm(doc=2264)
          0.048870068 = weight(_text_:22 in 2264) [ClassicSimilarity], result of:
            0.048870068 = score(doc=2264,freq=2.0), product of:
              0.21051918 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.06011691 = queryNorm
              0.23214069 = fieldWeight in 2264, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2264)
      0.5 = coord(1/2)
    
    Abstract
    This study examines problems caused by initial articles in library catalogs. The problematic records observed are those whose titles begin with a word erroneously considered to be an article at the retrieval stage. Many retrieval algorithms edit queries by removing initial words that match articles in an exclusion list, whether or not the initial word actually is an article. Consequently, a certain number of documents remain more difficult to find. The study also examines user behavior during known-item retrieval using the title index in library catalogs, concentrating on the problems caused by the presence of an initial article or of a word that is a homograph of an article. Measures of success and effectiveness are taken to determine whether retrieval is affected in such cases.
    Date
    10. 9.2000 17:38:22
  15. Wolfekuhler, M.R.; Punch, W.F.: Finding salient features for personal Web pages categories (1997) 0.12
    0.11889078 = product of:
      0.23778155 = sum of:
        0.23778155 = sum of:
          0.18076648 = weight(_text_:word in 2673) [ClassicSimilarity], result of:
            0.18076648 = score(doc=2673,freq=4.0), product of:
              0.31520873 = queryWeight, product of:
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.06011691 = queryNorm
              0.5734818 = fieldWeight in 2673, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2673)
          0.05701508 = weight(_text_:22 in 2673) [ClassicSimilarity], result of:
            0.05701508 = score(doc=2673,freq=2.0), product of:
              0.21051918 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.06011691 = queryNorm
              0.2708308 = fieldWeight in 2673, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2673)
      0.5 = coord(1/2)
    
    Abstract
    Examines techniques that discover features in sets of pre-categorized documents, such that similar documents can be found on the WWW. Examines techniques which will classify training examples with high accuracy, then explains why this is not necessarily useful. Describes a method for extracting word clusters from the raw document features. Results show that the clustering technique is successful in discovering word groups in personal Web pages which can be used to find similar information on the WWW.
    Date
    1. 8.1996 22:08:06
  16. Lund, K.; Burgess, C.; Atchley, R.A.: Semantic and associative priming in high-dimensional semantic space (1995) 0.12
    0.11889078 = product of:
      0.23778155 = sum of:
        0.23778155 = sum of:
          0.18076648 = weight(_text_:word in 2151) [ClassicSimilarity], result of:
            0.18076648 = score(doc=2151,freq=4.0), product of:
              0.31520873 = queryWeight, product of:
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.06011691 = queryNorm
              0.5734818 = fieldWeight in 2151, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2151)
          0.05701508 = weight(_text_:22 in 2151) [ClassicSimilarity], result of:
            0.05701508 = score(doc=2151,freq=2.0), product of:
              0.21051918 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.06011691 = queryNorm
              0.2708308 = fieldWeight in 2151, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2151)
      0.5 = coord(1/2)
    
    Abstract
    We present a model of semantic memory that utilizes a high-dimensional semantic space constructed from a co-occurrence matrix. This matrix was formed by analyzing a multimillion-word corpus. Word vectors were then obtained by extracting rows and columns of this matrix. These vectors were subjected to multidimensional scaling. Words were found to cluster semantically, suggesting that interword distance may be interpretable as a measure of semantic similarity. In attempting to replicate with our simulation the semantic and ...
    Source
    Proceedings of the Seventeenth Annual Conference of the Cognitive Science Society: July 22 - 25, 1995, University of Pittsburgh / ed. by Johanna D. Moore and Jill Fain Lehmann
  17. Dick, S.J.: Astronomy's Three Kingdom System : a comprehensive classification system of celestial objects (2019) 0.12
    0.11607174 = sum of:
      0.0875642 = product of:
        0.2626926 = sum of:
          0.2626926 = weight(_text_:objects in 5455) [ClassicSimilarity], result of:
            0.2626926 = score(doc=5455,freq=8.0), product of:
              0.31952566 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.06011691 = queryNorm
              0.82213306 = fieldWeight in 5455, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5455)
        0.33333334 = coord(1/3)
      0.02850754 = product of:
        0.05701508 = sum of:
          0.05701508 = weight(_text_:22 in 5455) [ClassicSimilarity], result of:
            0.05701508 = score(doc=5455,freq=2.0), product of:
              0.21051918 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.06011691 = queryNorm
              0.2708308 = fieldWeight in 5455, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5455)
        0.5 = coord(1/2)
    
    Abstract
    Although classification has been an important aspect of astronomy since stellar spectroscopy in the late nineteenth century, to date no comprehensive classification system has existed for all classes of objects in the universe. Here we present such a system, and lay out its foundational definitions and principles. The system consists of the "Three Kingdoms" of planets, stars and galaxies, eighteen families, and eighty-two classes of objects. Gravitation is the defining organizing principle for the families and classes, and the physical nature of the objects is the defining characteristic of the classes. The system should prove useful for both scientific and pedagogical purposes.
    Date
    21.11.2019 18:46:22
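    The sum-of-products structure in the tree above shows how Lucene combines boolean sub-clauses: each clause's inner score is scaled by its coord factor (matching subclauses / total subclauses) before the clause scores are summed. A sketch under that assumption, using the `(score, matched, total)` values from the tree for doc 5455:

```python
def coord_sum(clauses):
    """Combine boolean clause scores as in a Lucene explain tree:
    each clause's inner score is multiplied by its coord factor
    (matching subclauses / total subclauses), then the results are summed."""
    return sum(score * matched / total for score, matched, total in clauses)

# From the explain tree for doc 5455: coord(1/3) on the "objects" clause,
# coord(1/2) on the "22" clause.
total = coord_sum([(0.2626926, 1, 3), (0.05701508, 1, 2)])
print(total)  # close to the reported 0.11607174
```

    Entries whose top line reads `product of: ... 0.5 = coord(1/2)` are the degenerate case with a single clause.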
  18. Garcés, P.J.; Olivas, J.A.; Romero, F.P.: Concept-matching IR systems versus word-matching information retrieval systems : considering fuzzy interrelations for indexing Web pages (2006) 0.11
    0.111663386 = product of:
      0.22332677 = sum of:
        0.22332677 = sum of:
          0.1826017 = weight(_text_:word in 5288) [ClassicSimilarity], result of:
            0.1826017 = score(doc=5288,freq=8.0), product of:
              0.31520873 = queryWeight, product of:
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.06011691 = queryNorm
              0.5793041 = fieldWeight in 5288, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5288)
          0.04072506 = weight(_text_:22 in 5288) [ClassicSimilarity], result of:
            0.04072506 = score(doc=5288,freq=2.0), product of:
              0.21051918 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.06011691 = queryNorm
              0.19345059 = fieldWeight in 5288, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5288)
      0.5 = coord(1/2)
    
    Abstract
    This article presents a semantic-based Web retrieval system that is capable of retrieving the Web pages that are conceptually related to the implicit concepts of the query. The concept of concept is managed from a fuzzy point of view by means of semantic areas. In this context, the proposed system improves most search engines that are based on matching words. The key to the system is to use a new version of the Fuzzy Interrelations and Synonymy-Based Concept Representation Model (FIS-CRM) to extract and represent the concepts contained in both the Web pages and the user query. This model, which was integrated into other tools such as the Fuzzy Interrelations and Synonymy based Searcher (FISS) metasearcher and the fz-mail system, considers the fuzzy synonymy and the fuzzy generality interrelations as a means of representing word interrelations (stored in a fuzzy synonymy dictionary and ontologies). The new version of the model, which is based on the study of the co-occurrences of synonyms, integrates a soft method for disambiguating word senses. This method also considers the context of the word to be disambiguated and the thematic ontologies and sets of synonyms stored in the dictionary.
    Date
    22. 7.2006 17:14:12
  19. Malsburg, C. von der: ¬The correlation theory of brain function (1981) 0.11
    0.11084092 = product of:
      0.22168183 = sum of:
        0.22168183 = product of:
          0.33252275 = sum of:
            0.0938188 = weight(_text_:objects in 76) [ClassicSimilarity], result of:
              0.0938188 = score(doc=76,freq=2.0), product of:
                0.31952566 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.06011691 = queryNorm
                0.29361898 = fieldWeight in 76, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=76)
            0.23870397 = weight(_text_:3a in 76) [ClassicSimilarity], result of:
              0.23870397 = score(doc=76,freq=2.0), product of:
                0.5096718 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.06011691 = queryNorm
                0.46834838 = fieldWeight in 76, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=76)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    A summary of brain theory is given so far as it is contained within the framework of Localization Theory. Difficulties of this "conventional theory" are traced back to a specific deficiency: there is no way to express relations between active cells (as for instance their representing parts of the same object). A new theory is proposed to cure this deficiency. It introduces a new kind of dynamical control, termed synaptic modulation, according to which synapses switch between a conducting and a non-conducting state. The dynamics of this variable is controlled on a fast time scale by correlations in the temporal fine structure of cellular signals. Furthermore, conventional synaptic plasticity is replaced by a refined version. Synaptic modulation and plasticity form the basis for short-term and long-term memory, respectively. Signal correlations, shaped by the variable network, express structure and relationships within objects. In particular, the figure-ground problem may be solved in this way. Synaptic modulation introduces flexibility into cerebral networks which is necessary to solve the invariance problem. Since momentarily useless connections are deactivated, interference between different memory traces can be reduced, and memory capacity increased, in comparison with conventional associative memory
    Source
    http://cogprints.org/1380/1/vdM_correlation.pdf
  20. Qin, J.; Hernández, N.: Building interoperable vocabulary and structures for learning objects : an empirical study (2006) 0.11
    0.10819629 = sum of:
      0.062545866 = product of:
        0.1876376 = sum of:
          0.1876376 = weight(_text_:objects in 4926) [ClassicSimilarity], result of:
            0.1876376 = score(doc=4926,freq=8.0), product of:
              0.31952566 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.06011691 = queryNorm
              0.58723795 = fieldWeight in 4926, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4926)
        0.33333334 = coord(1/3)
      0.045650426 = product of:
        0.09130085 = sum of:
          0.09130085 = weight(_text_:word in 4926) [ClassicSimilarity], result of:
            0.09130085 = score(doc=4926,freq=2.0), product of:
              0.31520873 = queryWeight, product of:
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.06011691 = queryNorm
              0.28965205 = fieldWeight in 4926, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4926)
        0.5 = coord(1/2)
    
    Abstract
    The structural, functional, and production views on learning objects influence metadata structure and vocabulary. The authors drew on these views and conducted a literature review and in-depth analysis of 14 learning objects and over 500 components in these learning objects to model the knowledge framework for a learning object ontology. The learning object ontology reported in this article consists of 8 top-level classes, 28 classes at the second level, and 34 at the third level. Except for the class Learning object, all other classes have the three properties of preferred term, related term, and synonym. To validate the ontology, we conducted a query log analysis that focused on discovering what terms users have used at both conceptual and word levels. The findings show that the main classes in the ontology are either conceptually or linguistically similar to the top terms in the query log data. The authors built an "Exercise Editor" as an informal experiment to test its adoption ability in authoring tools. The main contribution of this project is in the framework for the learning object domain and the methodology used to develop and validate an ontology.
