Search (8 results, page 1 of 1)

  • × theme_ss:"Computerlinguistik"
  • × type_ss:"el"
  • × year_i:[2010 TO 2020}
  1. Menge-Sonnentag, R.: Google veröffentlicht einen Parser für natürliche Sprache (2016) 0.01
    0.009083348 = product of:
      0.06358343 = sum of:
        0.028745173 = weight(_text_:open in 2941) [ClassicSimilarity], result of:
          0.028745173 = score(doc=2941,freq=2.0), product of:
            0.14443703 = queryWeight, product of:
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.0320743 = queryNorm
            0.19901526 = fieldWeight in 2941, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.03125 = fieldNorm(doc=2941)
        0.034838263 = weight(_text_:source in 2941) [ClassicSimilarity], result of:
          0.034838263 = score(doc=2941,freq=2.0), product of:
            0.15900996 = queryWeight, product of:
              4.9575505 = idf(docFreq=844, maxDocs=44218)
              0.0320743 = queryNorm
            0.21909484 = fieldWeight in 2941, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9575505 = idf(docFreq=844, maxDocs=44218)
              0.03125 = fieldNorm(doc=2941)
      0.14285715 = coord(2/14)
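    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output. A minimal sketch that reproduces hit 1's score from the values shown (tf, idf, queryNorm, fieldNorm and the coord factor); the helper name is illustrative, not Lucene API:

      import math

      def term_score(freq, idf, query_norm, field_norm):
          # queryWeight = idf * queryNorm; fieldWeight = sqrt(tf) * idf * fieldNorm
          return (idf * query_norm) * (math.sqrt(freq) * idf * field_norm)

      query_norm, field_norm = 0.0320743, 0.03125
      s_open   = term_score(2.0, 4.5032015, query_norm, field_norm)  # ~0.02874517
      s_source = term_score(2.0, 4.9575505, query_norm, field_norm)  # ~0.03483826

      # only 2 of the 14 query clauses matched, hence the coord(2/14) factor
      print((s_open + s_source) * (2 / 14))   # ~0.009083348, displayed as 0.01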
    
    Abstract
    SyntaxNet breaks sentences down into their grammatical constituents and determines the syntactic relationships between the words. The framework is open source and implemented as a TensorFlow model. A parser for natural language is a piece of software that decomposes sentences into their grammatical constituents. This decomposition is necessary for computers to understand commands or to translate texts. Digital assistants such as Microsoft's Cortana, Apple's Siri and Google Now use parsers to carry out sentences like "Set the alarm for 5 o'clock!" correctly. SyntaxNet is such a parser, which Google has released as a TensorFlow model. Developers can create their own models, and SyntaxNet comes with a pre-trained parser for English, which its creators have named Parsey McParseface.
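    What such a dependency parse produces can be illustrated in a few lines of Python. The sketch below uses spaCy as a stand-in, since SyntaxNet itself is invoked through TensorFlow tooling not shown in the article; the model name "en_core_web_sm" and the example sentence are assumptions for illustration only.

      # Minimal sketch: print each word, its dependency label and its head,
      # i.e. the grammatical structure a parser like SyntaxNet recovers.
      import spacy

      nlp = spacy.load("en_core_web_sm")      # small pre-trained English model
      doc = nlp("Set the alarm for 5 o'clock!")

      for token in doc:
          print(f"{token.text:10} {token.dep_:10} head={token.head.text}")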
  2. Liu, P.J.; Saleh, M.; Pot, E.; Goodrich, B.; Sepassi, R.; Kaiser, L.; Shazeer, N.: Generating Wikipedia by summarizing long sequences (2018) 0.00
    0.004354783 = product of:
      0.060966957 = sum of:
        0.060966957 = weight(_text_:source in 773) [ClassicSimilarity], result of:
          0.060966957 = score(doc=773,freq=2.0), product of:
            0.15900996 = queryWeight, product of:
              4.9575505 = idf(docFreq=844, maxDocs=44218)
              0.0320743 = queryNorm
            0.38341597 = fieldWeight in 773, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9575505 = idf(docFreq=844, maxDocs=44218)
              0.0546875 = fieldNorm(doc=773)
      0.071428575 = coord(1/14)
    
    Abstract
    We show that generating English Wikipedia articles can be approached as a multi-document summarization of source documents. We use extractive summarization to coarsely identify salient information and a neural abstractive model to generate the article. For the abstractive model, we introduce a decoder-only architecture that can scalably attend to very long sequences, much longer than typical encoder-decoder architectures used in sequence transduction. We show that this model can generate fluent, coherent multi-sentence paragraphs and even whole Wikipedia articles. When given reference documents, we show it can extract relevant factual information as reflected in perplexity, ROUGE scores and human evaluations.
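    The coarse extractive first stage described in the abstract can be sketched with ordinary TF-IDF ranking: keep the source paragraphs most similar to the article title and hand them to the abstractive model. The use of scikit-learn and the function below are assumptions for illustration; the paper's own pipeline is neural and operates at much larger scale.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      def extract_salient(title, paragraphs, k=3):
          # Toy extractive step: keep the k paragraphs most similar to the title.
          vec = TfidfVectorizer(stop_words="english")
          matrix = vec.fit_transform([title] + paragraphs)
          sims = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
          return [paragraphs[i] for i in sims.argsort()[::-1][:k]]

      # The selected paragraphs would then be fed to the decoder-only
      # abstractive model that generates the article text.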
  3. Rajasurya, S.; Muralidharan, T.; Devi, S.; Swamynathan, S.: Semantic information retrieval using ontology in university domain (2012) 0.00
    0.0026959025 = product of:
      0.037742633 = sum of:
        0.037742633 = weight(_text_:web in 2861) [ClassicSimilarity], result of:
          0.037742633 = score(doc=2861,freq=8.0), product of:
            0.10467481 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0320743 = queryNorm
            0.36057037 = fieldWeight in 2861, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2861)
      0.071428575 = coord(1/14)
    
    Abstract
    Today's conventional search engines hardly provide content that is truly relevant to the user's search query, because the context and semantics of the user's request are not analyzed to the full extent. This is where semantic web search (SWS) comes in: an emerging area of web search that combines Natural Language Processing and Artificial Intelligence. The objective of the work presented here is to design, develop and implement a semantic search engine, SIEU (Semantic Information Extraction in University Domain), confined to the university domain. SIEU uses an ontology as the knowledge base for the information retrieval process. It is not a mere keyword search; it works one layer above what Google or any other search engine retrieves by analyzing just the keywords. Here the query is analyzed both syntactically and semantically. Through keyword expansion, the developed system retrieves web results that are more relevant to the user query, and accuracy is further enhanced because the query is analyzed semantically. The system will be of great use to developers and researchers who work on the web. The Google results are re-ranked and optimized to provide the most relevant links; for the ranking, an algorithm is applied that fetches more apt results for the user query.
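    The keyword expansion and re-ranking described above can be sketched as follows. The small ontology dictionary, the sample documents and the overlap score are invented placeholders, not the SIEU implementation.

      # Toy sketch: expand query terms via an ontology, then re-rank results
      # by how many expanded terms each document contains.
      ONTOLOGY = {
          "lecturer": {"professor", "faculty", "instructor"},
          "course":   {"subject", "module", "class"},
      }

      def expand(terms):
          expanded = set(terms)
          for t in terms:
              expanded |= ONTOLOGY.get(t, set())
          return expanded

      def rerank(results, terms):
          expanded = expand(terms)
          return sorted(results, key=lambda doc: -sum(t in doc.lower() for t in expanded))

      print(rerank(["Faculty list of the CS department", "Campus parking rules"],
                   ["lecturer"]))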
  4. Wong, W.; Liu, W.; Bennamoun, M.: Ontology learning from text : a look back and into the future (2010) 0.00
    0.0026688075 = product of:
      0.037363302 = sum of:
        0.037363302 = weight(_text_:web in 4733) [ClassicSimilarity], result of:
          0.037363302 = score(doc=4733,freq=4.0), product of:
            0.10467481 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0320743 = queryNorm
            0.35694647 = fieldWeight in 4733, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4733)
      0.071428575 = coord(1/14)
    
    Abstract
    Ontologies are often viewed as the answer to the need for interoperable semantics in modern information systems. The explosion of textual information on the "Read/Write" Web, coupled with the increasing demand for ontologies to power the Semantic Web, has made (semi-)automatic ontology learning from text a very promising research area. This, together with the advanced state of related areas such as natural language processing, has fuelled research into ontology learning over the past decade. This survey looks at how far we have come since the turn of the millennium and discusses the remaining challenges that will define the research directions in this area in the near future.
  5. Perovsek, M.; Kranjc, J.; Erjavec, T.; Cestnik, B.; Lavrac, N.: TextFlows : a visual programming platform for text mining and natural language processing (2016) 0.00
    0.0022875492 = product of:
      0.032025687 = sum of:
        0.032025687 = weight(_text_:web in 2697) [ClassicSimilarity], result of:
          0.032025687 = score(doc=2697,freq=4.0), product of:
            0.10467481 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0320743 = queryNorm
            0.3059541 = fieldWeight in 2697, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2697)
      0.071428575 = coord(1/14)
    
    Abstract
    Text mining and natural language processing are fast-growing areas of research, with numerous applications in business, science and the creative industries. This paper presents TextFlows, a web-based text mining and natural language processing platform supporting workflow construction, sharing and execution. The platform enables visual construction of text mining workflows through a web browser and the execution of the constructed workflows on a processing cloud. This makes TextFlows an adaptable infrastructure for the construction and sharing of text processing workflows, which can be reused in various applications. The paper presents the implemented text mining and language processing modules and describes some precomposed workflows. Their features are demonstrated on three use cases: comparison of document classifiers and of different part-of-speech taggers on a text categorization problem, and outlier detection in document corpora.
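    The first of these use cases, comparing document classifiers on a text categorization problem, can be approximated outside TextFlows with an ordinary scikit-learn script; the dataset and classifier choices below are illustrative assumptions, not the workflows shipped with the platform.

      from sklearn.datasets import fetch_20newsgroups
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.pipeline import make_pipeline

      data = fetch_20newsgroups(subset="train", categories=["sci.med", "sci.space"])

      # Compare two classifiers over the same TF-IDF features, as a TextFlows
      # workflow would, and report cross-validated accuracy.
      for clf in (MultinomialNB(), LogisticRegression(max_iter=1000)):
          pipe = make_pipeline(TfidfVectorizer(), clf)
          scores = cross_val_score(pipe, data.data, data.target, cv=5)
          print(type(clf).__name__, round(scores.mean(), 3))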
  6. Spitkovsky, V.; Norvig, P.: From words to concepts and back : dictionaries for linking text, entities and ideas (2012) 0.00
    0.0018677762 = product of:
      0.026148865 = sum of:
        0.026148865 = weight(_text_:web in 337) [ClassicSimilarity], result of:
          0.026148865 = score(doc=337,freq=6.0), product of:
            0.10467481 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0320743 = queryNorm
            0.24981049 = fieldWeight in 337, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=337)
      0.071428575 = coord(1/14)
    
    Abstract
    Human language is both rich and ambiguous. When we hear or read words, we resolve meanings to mental representations, for example recognizing and linking names to the intended persons, locations or organizations. Bridging words and meaning - from turning search queries into relevant results to suggesting targeted keywords for advertisers - is also Google's core competency, and important for many other tasks in information retrieval and natural language processing. We are happy to release a resource, spanning 7,560,141 concepts and 175,100,788 unique text strings, that we hope will help everyone working in these areas. How do we represent concepts? Our approach piggybacks on the unique titles of entries from an encyclopedia, which are mostly proper and common noun phrases. We consider each individual Wikipedia article as representing a concept (an entity or an idea), identified by its URL. Text strings that refer to concepts were collected using the publicly available hypertext of anchors (the text you click on in a web link) that point to each Wikipedia page, thus drawing on the vast link structure of the web. For every English article we harvested the strings associated with its incoming hyperlinks from the rest of Wikipedia, the greater web, and also anchors of parallel, non-English Wikipedia pages. Our dictionaries are cross-lingual, and any concept deemed too fine can be broadened to a desired level of generality using Wikipedia's groupings of articles into hierarchical categories. The data set contains triples, each consisting of (i) text, a short, raw natural language string; (ii) url, a related concept, represented by an English Wikipedia article's canonical location; and (iii) count, an integer indicating the number of times text has been observed connected with the concept's url. Our database thus includes weights that measure degrees of association. For example, the top two entries for football indicate that it is an ambiguous term, which is almost twice as likely to refer to what we in the US call soccer. See also: Spitkovsky, V.I., A.X. Chang: A cross-lingual dictionary for english Wikipedia concepts. In: http://nlp.stanford.edu/pubs/crosswikis.pdf.
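    The (text, url, count) triples described above support a simple disambiguation lookup: for a surface string, rank the candidate concepts by their association counts. The sketch below does exactly that; the sample counts are invented for illustration and do not come from the released dictionary.

      from collections import defaultdict

      # (text, url, count) triples; counts are invented sample values
      TRIPLES = [
          ("football", "en.wikipedia.org/wiki/Association_football", 20000),
          ("football", "en.wikipedia.org/wiki/American_football",    11000),
          ("football", "en.wikipedia.org/wiki/Football",              3000),
      ]

      index = defaultdict(list)
      for text, url, count in TRIPLES:
          index[text].append((url, count))

      def candidates(text):
          # return candidate concepts with their association probabilities
          entries = index.get(text, [])
          total = sum(c for _, c in entries)
          return [(url, c / total) for url, c in sorted(entries, key=lambda e: -e[1])]

      print(candidates("football"))   # the soccer sense is roughly twice as likely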
  7. Lezius, W.: Morphy - Morphologie und Tagging für das Deutsche (2013) 0.00
    0.0012416071 = product of:
      0.017382499 = sum of:
        0.017382499 = product of:
          0.034764998 = sum of:
            0.034764998 = weight(_text_:22 in 1490) [ClassicSimilarity], result of:
              0.034764998 = score(doc=1490,freq=2.0), product of:
                0.11231873 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0320743 = queryNorm
                0.30952093 = fieldWeight in 1490, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1490)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Date
    22. 3.2015 9:30:24
  8. Rötzer, F.: KI-Programm besser als Menschen im Verständnis natürlicher Sprache (2018) 0.00
    6.2080356E-4 = product of:
      0.008691249 = sum of:
        0.008691249 = product of:
          0.017382499 = sum of:
            0.017382499 = weight(_text_:22 in 4217) [ClassicSimilarity], result of:
              0.017382499 = score(doc=4217,freq=2.0), product of:
                0.11231873 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0320743 = queryNorm
                0.15476047 = fieldWeight in 4217, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4217)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Date
    22. 1.2018 11:32:44