Search (67 results, page 1 of 4)

  • × language_ss:"e"
  • × theme_ss:"Computerlinguistik"
  • × year_i:[2010 TO 2020}
  1. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.15
    0.1471357 = product of:
      0.2942714 = sum of:
        0.22512795 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.22512795 = score(doc=563,freq=2.0), product of:
            0.4005707 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.047248192 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.06914345 = sum of:
          0.030734586 = weight(_text_:science in 563) [ClassicSimilarity], result of:
            0.030734586 = score(doc=563,freq=4.0), product of:
              0.124457374 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.047248192 = queryNorm
              0.24694869 = fieldWeight in 563, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.046875 = fieldNorm(doc=563)
          0.038408864 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
            0.038408864 = score(doc=563,freq=2.0), product of:
              0.16545512 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047248192 = queryNorm
              0.23214069 = fieldWeight in 563, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=563)
      0.5 = coord(2/4)
    
    Content
    A thesis presented to the University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
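    The explain tree above is Lucene's ClassicSimilarity (tf-idf) breakdown: each term's contribution is queryWeight × fieldWeight, where queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, tf = √freq, and idf = 1 + ln(maxDocs / (docFreq + 1)). A minimal sketch (function name ours) that reproduces the `_text_:2f` weight from the numbers shown:

```python
import math

def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """One term's score in Lucene's ClassicSimilarity explain output:
    score = queryWeight * fieldWeight."""
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 8.478011 for the values below
    tf = math.sqrt(freq)                             # 1.4142135 for freq=2.0
    query_weight = idf * query_norm                  # 0.4005707
    field_weight = tf * idf * field_norm             # 0.56201804
    return query_weight * field_weight

# Values copied from the _text_:2f branch of the explain tree above:
score = classic_term_score(freq=2.0, doc_freq=24, max_docs=44218,
                           query_norm=0.047248192, field_norm=0.046875)
# score comes out at roughly the 0.22512795 reported above
```

    The remaining explain nodes combine such term weights with sum() and product() and scale by the coord() factor, the fraction of query clauses that matched the document (coord(2/4) = 0.5 here).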
  2. Keselman, A.; Rosemblat, G.; Kilicoglu, H.; Fiszman, M.; Jin, H.; Shin, D.; Rindflesch, T.C.: Adapting semantic natural language processing technology to address information overload in influenza epidemic management (2010) 0.03
    0.025495915 = product of:
      0.05099183 = sum of:
        0.041936565 = weight(_text_:management in 1312) [ClassicSimilarity], result of:
          0.041936565 = score(doc=1312,freq=4.0), product of:
            0.15925534 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.047248192 = queryNorm
            0.2633291 = fieldWeight in 1312, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1312)
        0.009055263 = product of:
          0.018110527 = sum of:
            0.018110527 = weight(_text_:science in 1312) [ClassicSimilarity], result of:
              0.018110527 = score(doc=1312,freq=2.0), product of:
                0.124457374 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.047248192 = queryNorm
                0.1455159 = fieldWeight in 1312, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1312)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The explosion of disaster health information results in information overload among response professionals. The objective of this project was to determine the feasibility of applying semantic natural language processing (NLP) technology to addressing this overload. The project characterizes concepts and relationships commonly used in disaster health-related documents on influenza pandemics, as the basis for adapting an existing semantic summarizer to the domain. Methods include human review and semantic NLP analysis of a set of relevant documents. This is followed by a pilot test in which two information specialists use the adapted application for a realistic information-seeking task. According to the results, the ontology of influenza epidemics management can be described via a manageable number of semantic relationships that involve concepts from a limited number of semantic types. Test users demonstrate several ways to engage with the application to obtain useful information. This suggests that existing semantic NLP algorithms can be adapted to support information summarization and visualization in influenza epidemics and other disaster health areas. However, additional research is needed in the areas of terminology development (as many relevant relationships and terms are not part of existing standardized vocabularies), NLP, and user interface design.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.12, S.2531-2543
  3. Lu, K.; Cai, X.; Ajiferuke, I.; Wolfram, D.: Vocabulary size and its effect on topic representation (2017) 0.03
    0.025475822 = product of:
      0.050951645 = sum of:
        0.035584353 = weight(_text_:management in 3414) [ClassicSimilarity], result of:
          0.035584353 = score(doc=3414,freq=2.0), product of:
            0.15925534 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.047248192 = queryNorm
            0.22344214 = fieldWeight in 3414, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=3414)
        0.015367293 = product of:
          0.030734586 = sum of:
            0.030734586 = weight(_text_:science in 3414) [ClassicSimilarity], result of:
              0.030734586 = score(doc=3414,freq=4.0), product of:
                0.124457374 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.047248192 = queryNorm
                0.24694869 = fieldWeight in 3414, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3414)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This study investigates how computational overhead for topic model training may be reduced by selectively removing terms from the vocabulary of text corpora being modeled. We compare the impact of removing singly occurring terms, the top 0.5%, 1% and 5% most frequently occurring terms and both top 0.5% most frequent and singly occurring terms, along with changes in the number of topics modeled (10, 20, 30, 40, 50, 100) using three datasets. Four outcome measures are compared. The removal of singly occurring terms has little impact on outcomes for all of the measures tested. Document discriminative capacity, as measured by the document space density, is reduced by the removal of frequently occurring terms, but increases with higher numbers of topics. Vocabulary size does not greatly influence entropy, but entropy is affected by the number of topics. Finally, topic similarity, as measured by pairwise topic similarity and Jensen-Shannon divergence, decreases with the removal of frequent terms. The findings have implications for information science research in information retrieval and informetrics that makes use of topic modeling.
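    The pruning strategies compared above can be sketched in a few lines; this is an illustration of the idea (singleton removal plus a top-fraction frequency cut-off), not the paper's exact pipeline:

```python
from collections import Counter

def prune_vocabulary(docs, drop_singletons=True, top_fraction=0.005):
    """Remove singly occurring terms and the top fraction of most frequent
    terms from tokenized documents before topic model training."""
    counts = Counter(tok for doc in docs for tok in doc)
    n_top = int(len(counts) * top_fraction)
    top_terms = {term for term, _ in counts.most_common(n_top)}
    def keep(tok):
        if drop_singletons and counts[tok] == 1:
            return False
        return tok not in top_terms
    return [[tok for tok in doc if keep(tok)] for doc in docs]

docs = [["topic", "model", "training", "rare1"],
        ["topic", "model", "vocabulary"],
        ["topic", "size", "rare2"]]
pruned = prune_vocabulary(docs, top_fraction=0.0)  # only singletons removed here
# pruned == [["topic", "model"], ["topic", "model"], ["topic"]]
```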
    Content
    Cf.: http://www.sciencedirect.com/science/article/pii/S0306457317300298.
    Source
    Information processing and management. 53(2017) no.3, S.653-665
  4. Deventer, J.P. van; Kruger, C.J.; Johnson, R.D.: Delineating knowledge management through lexical analysis : a retrospective (2015) 0.02
    0.023577852 = product of:
      0.047155704 = sum of:
        0.03595312 = weight(_text_:management in 3807) [ClassicSimilarity], result of:
          0.03595312 = score(doc=3807,freq=6.0), product of:
            0.15925534 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.047248192 = queryNorm
            0.22575769 = fieldWeight in 3807, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3807)
        0.011202586 = product of:
          0.022405172 = sum of:
            0.022405172 = weight(_text_:22 in 3807) [ClassicSimilarity], result of:
              0.022405172 = score(doc=3807,freq=2.0), product of:
                0.16545512 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047248192 = queryNorm
                0.1354154 = fieldWeight in 3807, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3807)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose: Academic authors tend to define terms that meet their own needs. Knowledge Management (KM) is a term that comes to mind and is examined in this study. Lexicographical research identified KM terms used by authors from 1996 to 2006 in academic outlets to define KM. Data were collected based on strict criteria, which included that definitions should be unique instances. From 2006 onwards, these authors could not identify new unique instances of definitions, with repetitive usage of such definition instances. Analysis revealed that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, and Process) and Contextualised Content (Information). The paper aims to discuss these issues.
    Design/methodology/approach: The aim of this paper is to add to the body of knowledge in the KM discipline and supply KM practitioners and scholars with insight into what is commonly regarded to be KM, so as to reignite the debate on what one could consider as KM. The lexicon used by KM scholars was evaluated through the application of lexicographical research methods, as extended through Knowledge Discovery and Text Analysis methods.
    Findings: By simplifying term relationships through the application of these methods, it was found that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, Process) and Contextualised Content (Information). One would therefore be able to indicate that KM, from an academic point of view, refers to people processing contextualised content.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 67(2015) no.2, S.203-229
  5. AL-Smadi, M.; Jaradat, Z.; AL-Ayyoub, M.; Jararweh, Y.: Paraphrase identification and semantic text similarity analysis in Arabic news tweets using lexical, syntactic, and semantic features (2017) 0.02
    0.023225334 = product of:
      0.046450667 = sum of:
        0.035584353 = weight(_text_:management in 5095) [ClassicSimilarity], result of:
          0.035584353 = score(doc=5095,freq=2.0), product of:
            0.15925534 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.047248192 = queryNorm
            0.22344214 = fieldWeight in 5095, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=5095)
        0.010866316 = product of:
          0.021732632 = sum of:
            0.021732632 = weight(_text_:science in 5095) [ClassicSimilarity], result of:
              0.021732632 = score(doc=5095,freq=2.0), product of:
                0.124457374 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.047248192 = queryNorm
                0.17461908 = fieldWeight in 5095, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5095)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Content
    Cf.: http://www.sciencedirect.com/science/article/pii/S0306457316302382.
    Source
    Information processing and management. 53(2017) no.3, S.640-652
  6. Lawrie, D.; Mayfield, J.; McNamee, P.; Oard, D.W.: Cross-language person-entity linking from 20 languages (2015) 0.02
    0.015035374 = product of:
      0.060141496 = sum of:
        0.060141496 = sum of:
          0.021732632 = weight(_text_:science in 1848) [ClassicSimilarity], result of:
            0.021732632 = score(doc=1848,freq=2.0), product of:
              0.124457374 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.047248192 = queryNorm
              0.17461908 = fieldWeight in 1848, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.046875 = fieldNorm(doc=1848)
          0.038408864 = weight(_text_:22 in 1848) [ClassicSimilarity], result of:
            0.038408864 = score(doc=1848,freq=2.0), product of:
              0.16545512 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047248192 = queryNorm
              0.23214069 = fieldWeight in 1848, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1848)
      0.25 = coord(1/4)
    
    Abstract
    The goal of entity linking is to associate references to an entity that is found in unstructured natural language content to an authoritative inventory of known entities. This article describes the construction of 6 test collections for cross-language person-entity linking that together span 22 languages. Fully automated components were used together with 2 crowdsourced validation stages to affordably generate ground-truth annotations with an accuracy comparable to that of a completely manual process. The resulting test collections each contain between 642 (Arabic) and 2,361 (Romanian) person references in non-English texts for which the correct resolution in English Wikipedia is known, plus a similar number of references for which no correct resolution into English Wikipedia is believed to exist. Fully automated cross-language person-name linking experiments with 20 non-English languages yielded a resolution accuracy of between 0.84 (Serbian) and 0.98 (Romanian), which compares favorably with previously reported cross-language entity linking results for Spanish.
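    The core lookup step of such a linker can be illustrated with a toy sketch: normalize mention strings (casefold, strip diacritics) and match them against an alias index of the known-entity inventory, with unmatched mentions resolving to NIL. All names and data below are invented for illustration:

```python
import unicodedata

def normalize(name):
    """Casefold and strip diacritics so surface variants can match."""
    decomposed = unicodedata.normalize("NFKD", name)
    return "".join(c for c in decomposed if not unicodedata.combining(c)).casefold()

def link_person(mention, inventory):
    """Return the inventory entity whose alias matches the mention, or None (NIL)."""
    index = {normalize(alias): entity
             for entity, aliases in inventory.items()
             for alias in aliases}
    return index.get(normalize(mention))

inventory = {"Q1": ["José García"], "Q2": ["Ana Lima"]}
link_person("Jose Garcia", inventory)  # matches "Q1" despite the missing accents
```

    Real cross-language linking adds transliteration and candidate ranking on top of this kind of lookup; the sketch only shows the normalization-and-match idea.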
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.6, S.1106-1123
  7. Colace, F.; Santo, M. De; Greco, L.; Napoletano, P.: Weighted word pairs for query expansion (2015) 0.01
    0.01037877 = product of:
      0.04151508 = sum of:
        0.04151508 = weight(_text_:management in 2687) [ClassicSimilarity], result of:
          0.04151508 = score(doc=2687,freq=2.0), product of:
            0.15925534 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.047248192 = queryNorm
            0.2606825 = fieldWeight in 2687, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2687)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 51(2015) no.1, S.179-193
  8. Doko, A.; Stula, M.; Seric, L.: Improved sentence retrieval using local context and sentence length (2013) 0.01
    0.008896088 = product of:
      0.035584353 = sum of:
        0.035584353 = weight(_text_:management in 2705) [ClassicSimilarity], result of:
          0.035584353 = score(doc=2705,freq=2.0), product of:
            0.15925534 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.047248192 = queryNorm
            0.22344214 = fieldWeight in 2705, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=2705)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 49(2013) no.6, S.1301-1312
  9. Fernández, R.T.; Losada, D.E.: Effective sentence retrieval based on query-independent evidence (2012) 0.01
    0.008896088 = product of:
      0.035584353 = sum of:
        0.035584353 = weight(_text_:management in 2728) [ClassicSimilarity], result of:
          0.035584353 = score(doc=2728,freq=2.0), product of:
            0.15925534 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.047248192 = queryNorm
            0.22344214 = fieldWeight in 2728, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=2728)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 48(2012) no.6, S.1203-1229
  10. Clark, M.; Kim, Y.; Kruschwitz, U.; Song, D.; Albakour, D.; Dignum, S.; Beresi, U.C.; Fasli, M.; De Roeck, A.: Automatically structuring domain knowledge from text : an overview of current research (2012) 0.01
    0.008896088 = product of:
      0.035584353 = sum of:
        0.035584353 = weight(_text_:management in 2738) [ClassicSimilarity], result of:
          0.035584353 = score(doc=2738,freq=2.0), product of:
            0.15925534 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.047248192 = queryNorm
            0.22344214 = fieldWeight in 2738, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=2738)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 48(2012) no.3, S.552-568
  11. Engerer, V.: Exploring interdisciplinary relationships between linguistics and information retrieval from the 1960s to today (2017) 0.01
    0.008590578 = product of:
      0.034362312 = sum of:
        0.034362312 = product of:
          0.068724625 = sum of:
            0.068724625 = weight(_text_:science in 3434) [ClassicSimilarity], result of:
              0.068724625 = score(doc=3434,freq=20.0), product of:
                0.124457374 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.047248192 = queryNorm
                0.55219406 = fieldWeight in 3434, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3434)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This article explores how linguistics has influenced information retrieval (IR) and attempts to explain the impact of linguistics through an analysis of internal developments in information science generally, and IR in particular. It notes that information science/IR has been evolving from a case science into a fully fledged, "disciplined"/disciplinary science. The article establishes correspondences between linguistics and information science/IR using the three established IR paradigms-physical, cognitive, and computational-as a frame of reference. The current relationship between information science/IR and linguistics is elucidated through discussion of some recent information science publications dealing with linguistic topics and a novel technique, "keyword collocation analysis," is introduced. Insights from interdisciplinarity research and case theory are also discussed. It is demonstrated that the three stages of interdisciplinarity, namely multidisciplinarity, interdisciplinarity (in the narrow sense), and transdisciplinarity, can be linked to different phases of the information science/IR-linguistics relationship and connected to different ways of using linguistic theory in information science and IR.
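    The abstract only names the "keyword collocation analysis" technique; as a rough illustration of the general idea (not the author's actual procedure), one can count how often keyword pairs co-occur on the same record:

```python
from collections import Counter
from itertools import combinations

def keyword_collocations(keyword_lists):
    """Count co-occurrences of keyword pairs across records; frequently
    co-occurring pairs suggest linked topics."""
    pairs = Counter()
    for kws in keyword_lists:
        for a, b in combinations(sorted(set(kws)), 2):
            pairs[(a, b)] += 1
    return pairs

# Invented example records, each a list of assigned keywords:
records = [["linguistics", "information retrieval"],
           ["linguistics", "information retrieval", "paradigms"],
           ["case theory", "information retrieval"]]
keyword_collocations(records).most_common(1)
# the ("information retrieval", "linguistics") pair co-occurs most often (count 2)
```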
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.3, S.660-680
  12. Brychcín, T.; Konopík, M.: HPS: High precision stemmer (2015) 0.01
    0.007413407 = product of:
      0.029653627 = sum of:
        0.029653627 = weight(_text_:management in 2686) [ClassicSimilarity], result of:
          0.029653627 = score(doc=2686,freq=2.0), product of:
            0.15925534 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.047248192 = queryNorm
            0.18620178 = fieldWeight in 2686, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2686)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 51(2015) no.1, S.68-91
  13. Gencosman, B.C.; Ozmutlu, H.C.; Ozmutlu, S.: Character n-gram application for automatic new topic identification (2014) 0.01
    0.007413407 = product of:
      0.029653627 = sum of:
        0.029653627 = weight(_text_:management in 2688) [ClassicSimilarity], result of:
          0.029653627 = score(doc=2688,freq=2.0), product of:
            0.15925534 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.047248192 = queryNorm
            0.18620178 = fieldWeight in 2688, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2688)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 50(2014) no.6, S.821-856
  14. Sankarasubramaniam, Y.; Ramanathan, K.; Ghosh, S.: Text summarization using Wikipedia (2014) 0.01
    0.007413407 = product of:
      0.029653627 = sum of:
        0.029653627 = weight(_text_:management in 2693) [ClassicSimilarity], result of:
          0.029653627 = score(doc=2693,freq=2.0), product of:
            0.15925534 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.047248192 = queryNorm
            0.18620178 = fieldWeight in 2693, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2693)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 50(2014) no.3, S.443-461
  15. Agarwal, B.; Ramampiaro, H.; Langseth, H.; Ruocco, M.: A deep network model for paraphrase detection in short text messages (2018) 0.01
    0.007413407 = product of:
      0.029653627 = sum of:
        0.029653627 = weight(_text_:management in 5043) [ClassicSimilarity], result of:
          0.029653627 = score(doc=5043,freq=2.0), product of:
            0.15925534 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.047248192 = queryNorm
            0.18620178 = fieldWeight in 5043, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5043)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 54(2018) no.6, S.922-937
  16. Vani, K.; Gupta, D.: Unmasking text plagiarism using syntactic-semantic based natural language processing techniques : comparisons, analysis and challenges (2018) 0.01
    0.007413407 = product of:
      0.029653627 = sum of:
        0.029653627 = weight(_text_:management in 5084) [ClassicSimilarity], result of:
          0.029653627 = score(doc=5084,freq=2.0), product of:
            0.15925534 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.047248192 = queryNorm
            0.18620178 = fieldWeight in 5084, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5084)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 54(2018) no.3, S.408-432
  17. Fang, L.; Tuan, L.A.; Hui, S.C.; Wu, L.: Syntactic based approach for grammar question retrieval (2018) 0.01
    0.007413407 = product of:
      0.029653627 = sum of:
        0.029653627 = weight(_text_:management in 5086) [ClassicSimilarity], result of:
          0.029653627 = score(doc=5086,freq=2.0), product of:
            0.15925534 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.047248192 = queryNorm
            0.18620178 = fieldWeight in 5086, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5086)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 54(2018) no.2, S.184-202
  18. Kocijan, K.: Visualizing natural language resources (2015) 0.01
    0.0064030383 = product of:
      0.025612153 = sum of:
        0.025612153 = product of:
          0.051224306 = sum of:
            0.051224306 = weight(_text_:science in 2995) [ClassicSimilarity], result of:
              0.051224306 = score(doc=2995,freq=4.0), product of:
                0.124457374 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.047248192 = queryNorm
                0.41158113 = fieldWeight in 2995, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2995)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Re:inventing information science in the networked society: Proceedings of the 14th International Symposium on Information Science, Zadar/Croatia, 19th-21st May 2015. Eds.: F. Pehar, C. Schloegl u. C. Wolff
  19. Belbachir, F.; Boughanem, M.: Using language models to improve opinion detection (2018) 0.01
    0.0059307255 = product of:
      0.023722902 = sum of:
        0.023722902 = weight(_text_:management in 5044) [ClassicSimilarity], result of:
          0.023722902 = score(doc=5044,freq=2.0), product of:
            0.15925534 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.047248192 = queryNorm
            0.14896142 = fieldWeight in 5044, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.03125 = fieldNorm(doc=5044)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 54(2018) no.6, S.958-968
  20. Perovsek, M.; Kranjc, J.; Erjavec, T.; Cestnik, B.; Lavrac, N.: TextFlows : a visual programming platform for text mining and natural language processing (2016) 0.00
    0.0047052535 = product of:
      0.018821014 = sum of:
        0.018821014 = product of:
          0.03764203 = sum of:
            0.03764203 = weight(_text_:science in 2697) [ClassicSimilarity], result of:
              0.03764203 = score(doc=2697,freq=6.0), product of:
                0.124457374 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.047248192 = queryNorm
                0.30244917 = fieldWeight in 2697, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2697)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Text mining and natural language processing are fast growing areas of research, with numerous applications in business, science and creative industries. This paper presents TextFlows, a web-based text mining and natural language processing platform supporting workflow construction, sharing and execution. The platform enables visual construction of text mining workflows through a web browser, and the execution of the constructed workflows on a processing cloud. This makes TextFlows an adaptable infrastructure for the construction and sharing of text processing workflows, which can be reused in various applications. The paper presents the implemented text mining and language processing modules, and describes some precomposed workflows. Their features are demonstrated on three use cases: comparison of document classifiers and of different part-of-speech taggers on a text categorization problem, and outlier detection in document corpora.
    Content
    Cf.: http://www.sciencedirect.com/science/article/pii/S0167642316000113. See also: http://textflows.org.
    Source
    Science of computer programming. In Press, 2016

Types

  • a 63
  • el 4
  • x 2
  • m 1