Search (5 results, page 1 of 1)

  • × theme_ss:"Computerlinguistik"
  • × type_ss:"el"
  • × year_i:[2000 TO 2010}
  1. Collins, C.: WordNet explorer : applying visualization principles to lexical semantics (2006) 0.01
    0.012835911 = product of:
      0.051343642 = sum of:
        0.051343642 = weight(_text_:research in 1288) [ClassicSimilarity], result of:
          0.051343642 = score(doc=1288,freq=4.0), product of:
            0.14397179 = queryWeight, product of:
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.050463587 = queryNorm
            0.35662293 = fieldWeight in 1288, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.0625 = fieldNorm(doc=1288)
      0.25 = coord(1/4)
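    The explain tree above follows Lucene's ClassicSimilarity (tf-idf) formula: each clause weight is queryWeight * fieldWeight, with queryWeight = idf * queryNorm and fieldWeight = sqrt(freq) * idf * fieldNorm, and the final score is scaled by the coord factor. As a minimal sketch (the function name and structure are illustrative, not a library API), the following Python reproduces the top-level score from the listed factors:

    import math

    def classic_similarity_score(freq, idf, query_norm, field_norm, coord):
        # queryWeight = idf * queryNorm
        query_weight = idf * query_norm
        # fieldWeight = tf * idf * fieldNorm, with tf = sqrt(freq)
        field_weight = math.sqrt(freq) * idf * field_norm
        # final score = coord * clause weight (a single clause here)
        return coord * query_weight * field_weight

    # Factors from the explain output above (doc 1288, term "research"):
    # freq=4.0, idf=2.8529835, queryNorm=0.050463587, fieldNorm=0.0625, coord=1/4
    print(classic_similarity_score(4.0, 2.8529835, 0.050463587, 0.0625, 0.25))
    # -> approximately 0.012835911

    The same arithmetic accounts for the score trees of the other hits below; multi-clause hits simply sum the clause weights before applying coord.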
    
    Abstract
    Interface designs for lexical databases in NLP have suffered from not following design principles developed in the information visualization research community. We present a design paradigm and show that it can be used to generate visualizations which maximize the usability and utility of WordNet. The techniques can be generally applied to other lexical databases used in NLP research.
  2. Bird, S.; Dale, R.; Dorr, B.; Gibson, B.; Joseph, M.; Kan, M.-Y.; Lee, D.; Powley, B.; Radev, D.; Tan, Y.F.: The ACL Anthology Reference Corpus : a reference dataset for bibliographic research in computational linguistics (2008) 0.01
    0.012835911 = product of:
      0.051343642 = sum of:
        0.051343642 = weight(_text_:research in 2804) [ClassicSimilarity], result of:
          0.051343642 = score(doc=2804,freq=16.0), product of:
            0.14397179 = queryWeight, product of:
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.050463587 = queryNorm
            0.35662293 = fieldWeight in 2804, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.03125 = fieldNorm(doc=2804)
      0.25 = coord(1/4)
    
    Abstract
    The ACL Anthology is a digital archive of conference and journal papers in natural language processing and computational linguistics. Its primary purpose is to serve as a reference repository of research results, but we believe that it can also be an object of study and a platform for research in its own right. We describe an enriched and standardized reference corpus derived from the ACL Anthology that can be used for research in scholarly document processing. This corpus, which we call the ACL Anthology Reference Corpus (ACL ARC), brings together the recent activities of a number of research groups around the world. Our goal is to make the corpus widely available, and to encourage other researchers to use it as a standard testbed for experiments in both bibliographic and bibliometric research.
    Content
    See also: Automatic Term Recognition (ATR) is a research task that deals with the identification of domain-specific terms. Terms, in simple words, are textual realizations of significant concepts in an expertise domain. Additionally, domain-specific terms may be classified into a number of categories, where each category represents a significant concept. A term classification task is often defined on top of an ATR procedure to perform such categorization. For instance, in the biomedical domain, terms can be classified as drugs, proteins, and genes. This is a reference dataset for terminology extraction and classification research in computational linguistics. It is a set of manually annotated English-language terms extracted from the ACL Anthology Reference Corpus (ACL ARC). The ACL ARC is a canonicalised and frozen subset of scientific publications in the domain of Human Language Technologies (HLT). It consists of 10,921 articles from 1965 to 2006. The dataset, called ACL RD-TEC, comprises more than 69,000 candidate terms that are manually annotated as valid or invalid terms. Furthermore, valid terms are classified as technology and non-technology terms. Technology terms refer to a method, process, or in general a technological concept in the domain of HLT, e.g. machine translation, word sense disambiguation, and language modelling. Non-technology terms, on the other hand, refer to important concepts other than technological ones; examples of such terms in the domain of HLT are multilingual lexicon, corpora, word sense, and language model. The dataset was created to serve as a gold standard for comparing algorithms for term recognition and classification. [http://catalog.elra.info/product_info.php?products_id=1236]
  3. Rötzer, F.: Computer ergooglen die Bedeutung von Worten (2005) 0.01
    0.012610171 = product of:
      0.025220342 = sum of:
        0.011605804 = weight(_text_:science in 3385) [ClassicSimilarity], result of:
          0.011605804 = score(doc=3385,freq=2.0), product of:
            0.1329271 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.050463587 = queryNorm
            0.08730954 = fieldWeight in 3385, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3385)
        0.013614539 = weight(_text_:research in 3385) [ClassicSimilarity], result of:
          0.013614539 = score(doc=3385,freq=2.0), product of:
            0.14397179 = queryWeight, product of:
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.050463587 = queryNorm
            0.09456394 = fieldWeight in 3385, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3385)
      0.5 = coord(2/4)
    
    Content
    Using a method previously developed by Paul Vitanyi and others that measures the relatedness of objects (normalized information distance, NID), the closeness between particular objects (images, words, patterns, intervals, genomes, programs, etc.) can be analysed across all their properties and determined from the dominant shared property. In much the same way, the commonly used, not necessarily "true" meanings of names can be derived from Google searches. 'At this moment one database stands out as the pinnacle of computer-accessible human knowledge and the most inclusive summary of statistical information: the Google search engine. There can be no doubt that Google has already enabled science to accelerate tremendously and revolutionized the research process. It has dominated the attention of internet users for years, and has recently attracted substantial attention of many Wall Street investors, even reshaping their ideas of company financing.' (Paul Vitanyi and Rudi Cilibrasi) Entering a word such as "Pferd" (horse) yields 4,310,000 indexed pages on Google. For "Reiter" (rider) there are 3,400,000 pages. Combining both terms still returns 315,000 pages. For the joint occurrence of, say, "Pferd" and "Bart" (beard), a still remarkable 67,100 pages are listed, but it is already clear that "Pferd" and "Reiter" are more closely related. This gives a certain probability for the co-occurrence of terms. From this frequency, taken relative to the maximum number of indexed pages (5,000,000,000), the two researchers derived a statistical measure they call the "normalised Google distance" (NGD), which normally lies between 0 and 1. The smaller the NGD, the more closely two terms are related. "This is automatic meaning generation," Vitanyi told the New Scientist (4). "It could well be a way of making a computer understand things and act semi-intelligently." If such searches are run repeatedly, a map of the connections between words can be built. And from this map, so the hope goes, a computer can in turn grasp the meaning of individual words in different natural languages and contexts. Through a number of such searches, it was found that a computer could distinguish between colours and numbers, tell apart seventeenth-century Dutch painters, separate emergencies from near-emergencies, or understand electrical or religious terms. Moreover, a simple automatic English-Spanish translation could be accomplished. In this way, the researchers hope, the meaning of words could be learned, speech recognition could be improved, a semantic web could be built, and, at last, better automatic translation from one language to another could be achieved.
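    The text summarizes the normalised Google distance without stating the formula. As a sketch based on the definition published by Cilibrasi and Vitanyi, NGD(x, y) = (max(log f(x), log f(y)) - log f(x, y)) / (log N - min(log f(x), log f(y))); the Python below applies it to the hit counts quoted above (function name and rounding are illustrative):

    import math

    def ngd(fx, fy, fxy, n):
        # normalised Google distance:
        # (max(log fx, log fy) - log fxy) / (log n - min(log fx, log fy))
        lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
        return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

    # Hit counts quoted above: "Pferd" 4,310,000 pages, "Reiter" 3,400,000,
    # both terms together 315,000, against roughly 5,000,000,000 indexed pages.
    print(round(ngd(4_310_000, 3_400_000, 315_000, 5_000_000_000), 3))
    # -> about 0.36; the smaller the value, the more closely the terms are related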
  4. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.01
    0.010255679 = product of:
      0.041022714 = sum of:
        0.041022714 = product of:
          0.08204543 = sum of:
            0.08204543 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.08204543 = score(doc=4888,freq=2.0), product of:
                0.17671488 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050463587 = queryNorm
                0.46428138 = fieldWeight in 4888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    1.3.2013 14:56:22
  5. Griffiths, T.L.; Steyvers, M.: A probabilistic approach to semantic representation (2002) 0.01
    0.007737203 = product of:
      0.030948812 = sum of:
        0.030948812 = weight(_text_:science in 3671) [ClassicSimilarity], result of:
          0.030948812 = score(doc=3671,freq=2.0), product of:
            0.1329271 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.050463587 = queryNorm
            0.23282544 = fieldWeight in 3671, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0625 = fieldNorm(doc=3671)
      0.25 = coord(1/4)
    
    Content
    Paper, Proceedings of the 24th Annual Conference of the Cognitive Science Society. See also: https://cocosci.berkeley.edu/publications.php?author=Steyvers,%20M.