Search (597 results, page 1 of 30)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.11
    0.10836022 = sum of:
      0.08128187 = product of:
        0.24384561 = sum of:
          0.24384561 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.24384561 = score(doc=562,freq=2.0), product of:
              0.43387505 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.051176514 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.0062772194 = weight(_text_:in in 562) [ClassicSimilarity], result of:
        0.0062772194 = score(doc=562,freq=2.0), product of:
          0.069613084 = queryWeight, product of:
            1.3602545 = idf(docFreq=30841, maxDocs=44218)
            0.051176514 = queryNorm
          0.09017298 = fieldWeight in 562, product of:
            1.4142135 = tf(freq=2.0), with freq of:
              2.0 = termFreq=2.0
            1.3602545 = idf(docFreq=30841, maxDocs=44218)
            0.046875 = fieldNorm(doc=562)
      0.020801133 = product of:
        0.041602265 = sum of:
          0.041602265 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.041602265 = score(doc=562,freq=2.0), product of:
              0.17921144 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051176514 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
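    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output: each term clause scores queryWeight × fieldWeight, with queryWeight = idf × queryNorm, fieldWeight = sqrt(tf) × idf × fieldNorm, idf = 1 + ln(maxDocs / (docFreq + 1)), and coord(k/n) scaling by the fraction of query clauses matched. A minimal Python sketch (helper names are illustrative) that reproduces the `_text_:3a` clause from the figures shown:

    ```python
    import math

    def idf(doc_freq: int, max_docs: int) -> float:
        # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def clause_score(freq, doc_freq, max_docs, query_norm, field_norm):
        tf = math.sqrt(freq)                # tf(freq) = sqrt(freq)
        w = idf(doc_freq, max_docs)
        query_weight = w * query_norm       # query-side normalization
        field_weight = tf * w * field_norm  # document-side weight
        return query_weight * field_weight

    # The "_text_:3a" clause of result 1 (doc 562), using the figures above:
    s = clause_score(freq=2.0, doc_freq=24, max_docs=44218,
                     query_norm=0.051176514, field_norm=0.046875)
    print(s)            # ~0.24384561
    print(s * (1 / 3))  # coord(1/3) applied -> ~0.08128187
    ```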
    
    Abstract
    Document representations for text classification are typically based on the classical Bag-of-Words paradigm. This approach comes with deficiencies that motivate the integration of features on a higher semantic level than single words. In this paper we propose an enhancement of the classical document representation through concepts extracted from background knowledge. Boosting is used for the actual classification. Experimental evaluations on two well-known text corpora support our approach, showing consistent improvement of the results. A minimal sketch of this setup follows.
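    The sketch assumes scikit-learn and a toy corpus; the concept features extracted from background knowledge would simply be appended to the term matrix as additional columns (all names and data below are illustrative):

    ```python
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.feature_extraction.text import CountVectorizer

    # Toy two-class corpus; labels and texts are illustrative only.
    docs = ["stocks fall as markets react to rates",
            "team wins the championship final",
            "bank raises interest rates again",
            "striker scores twice in the match"]
    labels = [0, 1, 0, 1]  # 0 = finance, 1 = sports

    vec = CountVectorizer()
    X = vec.fit_transform(docs)  # classical Bag-of-Words term matrix

    # AdaBoost over the default weak learner (a depth-1 decision stump):
    clf = AdaBoostClassifier(n_estimators=50, random_state=0)
    clf.fit(X, labels)

    print(clf.predict(vec.transform(["markets and interest rates fall"])))  # expected: [0]
    ```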
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.06
    0.06143622 = product of:
      0.09215433 = sum of:
        0.08128187 = product of:
          0.24384561 = sum of:
            0.24384561 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.24384561 = score(doc=862,freq=2.0), product of:
                0.43387505 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.051176514 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.010872464 = weight(_text_:in in 862) [ClassicSimilarity], result of:
          0.010872464 = score(doc=862,freq=6.0), product of:
            0.069613084 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.051176514 = queryNorm
            0.1561842 = fieldWeight in 862, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.6666667 = coord(2/3)
    
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges, summary and question answering, prompt ChatGPT to produce original content (98-99%) from a single text entry and from sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and a simple grammatical set for understanding the writing mechanics of chatbots, evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
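    A sketch of the detector step, assuming the Hugging Face transformers package; the model id is the community mirror of OpenAI's 2019 RoBERTa-based GPT-2 output detector (an assumption, substitute your own checkpoint):

    ```python
    from transformers import pipeline

    # RoBERTa classifier fine-tuned to flag GPT-2 output ("Real" vs "Fake");
    # the model id below is assumed to be the community mirror on the Hub.
    detector = pipeline("text-classification",
                        model="openai-community/roberta-base-openai-detector")

    samples = [
        "I propose to consider the question, 'Can machines think?'",  # Turing, 1950
        "As an AI language model, I can summarize the passage as follows.",
    ]
    for text in samples:
        result = detector(text)[0]
        print(f"{result['label']:>4}  {result['score']:.3f}  {text[:45]}")
    ```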
    Source
    https://arxiv.org/abs/2212.06721
  3. McCray, A.T.: Natural language research program (1992) 0.05
    0.05169737 = product of:
      0.07754605 = sum of:
        0.014795548 = weight(_text_:in in 7273) [ClassicSimilarity], result of:
          0.014795548 = score(doc=7273,freq=4.0), product of:
            0.069613084 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.051176514 = queryNorm
            0.21253976 = fieldWeight in 7273, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=7273)
        0.0627505 = product of:
          0.125501 = sum of:
            0.125501 = weight(_text_:education in 7273) [ClassicSimilarity], result of:
              0.125501 = score(doc=7273,freq=2.0), product of:
                0.24110512 = queryWeight, product of:
                  4.7112455 = idf(docFreq=1080, maxDocs=44218)
                  0.051176514 = queryNorm
                0.520524 = fieldWeight in 7273, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.7112455 = idf(docFreq=1080, maxDocs=44218)
                  0.078125 = fieldNorm(doc=7273)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Briefly describes the Natural Language Systems Program of the Lister Hill National Center for Biomedical Communications, VA, which conducts research in natural language processing to improve access to biomedical information stored in computerized form
    Source
    International information communication and education. 11(1992) no.2, S.256-258
  4. Zhai, X.: ChatGPT user experience : implications for education (2022) 0.05
    0.050258808 = product of:
      0.07538821 = sum of:
        0.0052310163 = weight(_text_:in in 849) [ClassicSimilarity], result of:
          0.0052310163 = score(doc=849,freq=2.0), product of:
            0.069613084 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.051176514 = queryNorm
            0.07514416 = fieldWeight in 849, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=849)
        0.07015719 = product of:
          0.14031439 = sum of:
            0.14031439 = weight(_text_:education in 849) [ClassicSimilarity], result of:
              0.14031439 = score(doc=849,freq=10.0), product of:
                0.24110512 = queryWeight, product of:
                  4.7112455 = idf(docFreq=1080, maxDocs=44218)
                  0.051176514 = queryNorm
                0.58196354 = fieldWeight in 849, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.7112455 = idf(docFreq=1080, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=849)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    ChatGPT, a general-purpose conversation chatbot released on November 30, 2022, by OpenAI, is expected to impact every aspect of society. However, the potential impacts of this NLP tool on education remain unknown. Such impact can be enormous, as the capacity of ChatGPT may drive changes to educational learning goals, learning activities, and assessment and evaluation practices. This study was conducted by piloting ChatGPT to write an academic paper, titled Artificial Intelligence for Education (see Appendix A). The piloting result suggests that ChatGPT is able to help researchers write a paper that is coherent, (partially) accurate, informative, and systematic. The writing is extremely efficient (2-3 hours) and involves very limited professional knowledge from the author. Drawing upon the user experience, I reflect on the potential impacts of ChatGPT, as well as similar AI tools, on education. The paper concludes by suggesting adjusted learning goals: students should be able to use AI tools to conduct subject-domain tasks, and education should focus on improving students' creativity and critical thinking rather than general skills. To accomplish the learning goals, researchers should design AI-involved learning tasks to engage students in solving real-world problems. ChatGPT also raises concerns that students may outsource assessment tasks. This paper concludes that new formats of assessment are needed to focus on creativity and critical thinking that AI cannot substitute.
  5. Babik, W.: Keywords as linguistic tools in information and knowledge organization (2017) 0.04
    0.03618816 = product of:
      0.054282237 = sum of:
        0.010356884 = weight(_text_:in in 3510) [ClassicSimilarity], result of:
          0.010356884 = score(doc=3510,freq=4.0), product of:
            0.069613084 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.051176514 = queryNorm
            0.14877784 = fieldWeight in 3510, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3510)
        0.043925352 = product of:
          0.087850705 = sum of:
            0.087850705 = weight(_text_:education in 3510) [ClassicSimilarity], result of:
              0.087850705 = score(doc=3510,freq=2.0), product of:
                0.24110512 = queryWeight, product of:
                  4.7112455 = idf(docFreq=1080, maxDocs=44218)
                  0.051176514 = queryNorm
                0.3643668 = fieldWeight in 3510, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.7112455 = idf(docFreq=1080, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3510)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Series
    Fortschritte in der Wissensorganisation; Bd.13
    Source
    Theorie, Semantik und Organisation von Wissen: Proceedings der 13. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und dem 13. Internationalen Symposium der Informationswissenschaft der Higher Education Association for Information Science (HI) Potsdam (19.-20.03.2013): 'Theory, Information and Organization of Knowledge' / Proceedings der 14. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und Natural Language & Information Systems (NLDB) Passau (16.06.2015): 'Lexical Resources for Knowledge Organization' / Proceedings des Workshops der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) auf der SEMANTICS Leipzig (1.09.2014): 'Knowledge Organization and Semantic Web' / Proceedings des Workshops der Polnischen und Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) Cottbus (29.-30.09.2011): 'Economics of Knowledge Production and Organization'. Hrsg. von W. Babik, H.P. Ohly u. K. Weber
  6. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.04
    0.03610447 = product of:
      0.054156706 = sum of:
        0.012554439 = weight(_text_:in in 4888) [ClassicSimilarity], result of:
          0.012554439 = score(doc=4888,freq=2.0), product of:
            0.069613084 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.051176514 = queryNorm
            0.18034597 = fieldWeight in 4888, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.09375 = fieldNorm(doc=4888)
        0.041602265 = product of:
          0.08320453 = sum of:
            0.08320453 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.08320453 = score(doc=4888,freq=2.0), product of:
                0.17921144 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051176514 = queryNorm
                0.46428138 = fieldWeight in 4888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    1. 3.2013 14:56:22
  7. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.04
    0.03610447 = product of:
      0.054156706 = sum of:
        0.012554439 = weight(_text_:in in 5429) [ClassicSimilarity], result of:
          0.012554439 = score(doc=5429,freq=2.0), product of:
            0.069613084 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.051176514 = queryNorm
            0.18034597 = fieldWeight in 5429, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.09375 = fieldNorm(doc=5429)
        0.041602265 = product of:
          0.08320453 = sum of:
            0.08320453 = weight(_text_:22 in 5429) [ClassicSimilarity], result of:
              0.08320453 = score(doc=5429,freq=2.0), product of:
                0.17921144 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051176514 = queryNorm
                0.46428138 = fieldWeight in 5429, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5429)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The human translator is still superior to the computer in linguistic terms. Translation software has improved, but the system-inherent problems remain
    Source
    c't. 2000, H.22, S.230-231
  8. Shen, M.; Liu, D.-R.; Huang, Y.-S.: Extracting semantic relations to enrich domain ontologies (2012) 0.03
    0.03416585 = product of:
      0.051248774 = sum of:
        0.0073234225 = weight(_text_:in in 267) [ClassicSimilarity], result of:
          0.0073234225 = score(doc=267,freq=2.0), product of:
            0.069613084 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.051176514 = queryNorm
            0.10520181 = fieldWeight in 267, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=267)
        0.043925352 = product of:
          0.087850705 = sum of:
            0.087850705 = weight(_text_:education in 267) [ClassicSimilarity], result of:
              0.087850705 = score(doc=267,freq=2.0), product of:
                0.24110512 = queryWeight, product of:
                  4.7112455 = idf(docFreq=1080, maxDocs=44218)
                  0.051176514 = queryNorm
                0.3643668 = fieldWeight in 267, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.7112455 = idf(docFreq=1080, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=267)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Domain ontologies facilitate the organization, sharing and reuse of domain knowledge, and enable various vertical domain applications to operate successfully. Most methods for automatically constructing ontologies focus on taxonomic relations, such as is-kind-of and is-part-of relations. However, much of the domain-specific semantics is ignored. This work proposes a semi-unsupervised approach for extracting semantic relations from domain-specific text documents. The approach effectively utilizes text mining and existing taxonomic relations in domain ontologies to discover candidate keywords that can represent semantic relations. A preliminary experiment on the natural science domain (Taiwan K9 education) indicates that the proposed method yields valuable recommendations. This work enriches domain ontologies by adding distilled semantics.
  9. Hutchins, J.: From first conception to first demonstration : the nascent years of machine translation, 1947-1954. A chronology (1997) 0.03
    0.032976072 = product of:
      0.049464107 = sum of:
        0.014795548 = weight(_text_:in in 1463) [ClassicSimilarity], result of:
          0.014795548 = score(doc=1463,freq=4.0), product of:
            0.069613084 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.051176514 = queryNorm
            0.21253976 = fieldWeight in 1463, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=1463)
        0.034668557 = product of:
          0.069337115 = sum of:
            0.069337115 = weight(_text_:22 in 1463) [ClassicSimilarity], result of:
              0.069337115 = score(doc=1463,freq=2.0), product of:
                0.17921144 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051176514 = queryNorm
                0.38690117 = fieldWeight in 1463, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1463)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Chronicles the early history of applying electronic computers to the task of translating natural languages, from the first suggestions by Warren Weaver in March 1947 to the first demonstration of a working, if limited, program in January 1954
    Date
    31. 7.1996 9:22:19
  10. Wanner, L.: Lexical choice in text generation and machine translation (1996) 0.03
    0.0309666 = product of:
      0.0464499 = sum of:
        0.018715054 = weight(_text_:in in 8521) [ClassicSimilarity], result of:
          0.018715054 = score(doc=8521,freq=10.0), product of:
            0.069613084 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.051176514 = queryNorm
            0.26884392 = fieldWeight in 8521, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=8521)
        0.027734846 = product of:
          0.05546969 = sum of:
            0.05546969 = weight(_text_:22 in 8521) [ClassicSimilarity], result of:
              0.05546969 = score(doc=8521,freq=2.0), product of:
                0.17921144 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051176514 = queryNorm
                0.30952093 = fieldWeight in 8521, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=8521)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Presents the state of the art in lexical choice research in text generation and machine translation. Discusses the existing implementations with respect to: the place of lexical choice in the overall generation process; the information flow within the generation process and the consequences thereof for lexical choice; the internal organization of the lexical choice process; and the phenomena covered by lexical choice. Identifies possible future directions in lexical choice research
    Date
    31. 7.1996 9:22:19
  11. Basili, R.; Pazienza, M.T.; Velardi, P.: ¬An empirical symbolic approach to natural language processing (1996) 0.03
    0.02815431 = product of:
      0.042231463 = sum of:
        0.014496619 = weight(_text_:in in 6753) [ClassicSimilarity], result of:
          0.014496619 = score(doc=6753,freq=6.0), product of:
            0.069613084 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.051176514 = queryNorm
            0.2082456 = fieldWeight in 6753, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=6753)
        0.027734846 = product of:
          0.05546969 = sum of:
            0.05546969 = weight(_text_:22 in 6753) [ClassicSimilarity], result of:
              0.05546969 = score(doc=6753,freq=2.0), product of:
                0.17921144 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051176514 = queryNorm
                0.30952093 = fieldWeight in 6753, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6753)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Describes and evaluates the results of a large-scale lexical learning system, ARISTO-LEX, that uses a combination of probabilistic and knowledge-based methods for the acquisition of selectional restrictions of words in sublanguages. Presents experimental data obtained from different corpora in different domains and languages, and shows that the acquired lexical data not only have practical applications in natural language processing, but are also useful for a comparative analysis of sublanguages
    Date
    6. 3.1997 16:22:15
  12. Morris, V.: Automated language identification of bibliographic resources (2020) 0.03
    0.02815431 = product of:
      0.042231463 = sum of:
        0.014496619 = weight(_text_:in in 5749) [ClassicSimilarity], result of:
          0.014496619 = score(doc=5749,freq=6.0), product of:
            0.069613084 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.051176514 = queryNorm
            0.2082456 = fieldWeight in 5749, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=5749)
        0.027734846 = product of:
          0.05546969 = sum of:
            0.05546969 = weight(_text_:22 in 5749) [ClassicSimilarity], result of:
              0.05546969 = score(doc=5749,freq=2.0), product of:
                0.17921144 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051176514 = queryNorm
                0.30952093 = fieldWeight in 5749, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5749)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This article describes experiments in the use of machine learning techniques at the British Library to assign language codes to catalog records, in order to provide information about the language of content of the resources described. In the first phase of the project, language codes were assigned to 1.15 million records with 99.7% confidence. The automated language identification tools developed will be used to contribute to future enhancement of over 4 million legacy records.
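    A minimal sketch of the record-level task, assuming the langdetect package; the ISO-639-1-to-MARC mapping is illustrative, and the 99.7% figure above motivates the confidence cutoff (the article's actual model is not shown here):

    ```python
    from langdetect import detect_langs  # pip install langdetect

    # Illustrative mapping from ISO 639-1 codes to MARC language codes.
    ISO_TO_MARC = {"en": "eng", "de": "ger", "fr": "fre", "es": "spa"}

    def marc_language(field_text: str, min_confidence: float = 0.997):
        """Assign a MARC code only when the detector clears the threshold."""
        best = detect_langs(field_text)[0]  # top candidate with probability
        if best.prob >= min_confidence and best.lang in ISO_TO_MARC:
            return ISO_TO_MARC[best.lang]
        return None  # below threshold: leave the record for human review

    print(marc_language("Die Entstehung der maschinellen Übersetzung aus "
                        "Sicht der Informationswissenschaft"))  # -> 'ger'
    ```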
    Date
    2. 3.2020 19:04:22
  13. Gill, A.J.; Hinrichs-Krapels, S.; Blanke, T.; Grant, J.; Hedges, M.; Tanner, S.: Insight workflow : systematically combining human and computational methods to explore textual data (2017) 0.03
    0.027891522 = product of:
      0.041837282 = sum of:
        0.010462033 = weight(_text_:in in 3682) [ClassicSimilarity], result of:
          0.010462033 = score(doc=3682,freq=8.0), product of:
            0.069613084 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.051176514 = queryNorm
            0.15028831 = fieldWeight in 3682, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3682)
        0.03137525 = product of:
          0.0627505 = sum of:
            0.0627505 = weight(_text_:education in 3682) [ClassicSimilarity], result of:
              0.0627505 = score(doc=3682,freq=2.0), product of:
                0.24110512 = queryWeight, product of:
                  4.7112455 = idf(docFreq=1080, maxDocs=44218)
                  0.051176514 = queryNorm
                0.260262 = fieldWeight in 3682, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.7112455 = idf(docFreq=1080, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3682)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Analyzing large quantities of real-world textual data has the potential to provide new insights for researchers. However, such data present challenges for both human and computational methods, requiring a diverse range of specialist skills, often shared across a number of individuals. In this paper we use the analysis of a real-world data set as our case study, and use this exploration as a demonstration of our "insight workflow," which we present for use and adaptation by other researchers. The data we use are impact case study documents collected as part of the UK Research Excellence Framework (REF), consisting of 6,679 documents and 6.25 million words; the analysis was commissioned by the Higher Education Funding Council for England (published as report HEFCE 2015). In our exploration and analysis we used a variety of techniques, ranging from keyword in context and frequency information to more sophisticated methods (topic modeling), with these automated techniques providing an empirical point of entry for in-depth and intensive human analysis. We present the 60 topics to demonstrate the output of our methods, and illustrate how the variety of analysis techniques can be combined to provide insights. We note potential limitations and propose future work.
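    As a small illustration of the entry-point technique named here, a keyword-in-context (KWIC) helper; the function and sample text are illustrative:

    ```python
    import re

    def kwic(text: str, keyword: str, width: int = 30):
        """Yield keyword-in-context lines: keyword centred in a fixed window."""
        for m in re.finditer(rf"\b{re.escape(keyword)}\b", text, re.IGNORECASE):
            left = text[max(0, m.start() - width):m.start()]
            right = text[m.end():m.end() + width]
            yield f"{left:>{width}} [{m.group()}] {right:<{width}}"

    sample = ("The impact case studies describe research impact in health, "
              "policy impact in government, and cultural impact more broadly.")
    for line in kwic(sample, "impact"):
        print(line)
    ```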
  14. Paolillo, J.C.: Linguistics and the information sciences (2009) 0.03
    0.027095776 = product of:
      0.040643662 = sum of:
        0.016375672 = weight(_text_:in in 3840) [ClassicSimilarity], result of:
          0.016375672 = score(doc=3840,freq=10.0), product of:
            0.069613084 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.051176514 = queryNorm
            0.23523843 = fieldWeight in 3840, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3840)
        0.02426799 = product of:
          0.04853598 = sum of:
            0.04853598 = weight(_text_:22 in 3840) [ClassicSimilarity], result of:
              0.04853598 = score(doc=3840,freq=2.0), product of:
                0.17921144 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051176514 = queryNorm
                0.2708308 = fieldWeight in 3840, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3840)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Linguistics is the scientific study of language which emphasizes language spoken in everyday settings by human beings. It has a long history of interdisciplinarity, both internally and in contribution to other fields, including information science. A linguistic perspective is beneficial in many ways in information science, since it examines the relationship between the forms of meaningful expressions and their social, cognitive, institutional, and communicative context, these being two perspectives on information that are actively studied, to different degrees, in information science. Examples of issues relevant to information science are presented for which the approach taken under a linguistic perspective is illustrated.
    Date
    27. 8.2011 14:22:33
  15. Riloff, E.: ¬An empirical study of automated dictionary construction for information extraction in three domains (1996) 0.03
    0.026380857 = product of:
      0.039571285 = sum of:
        0.011836439 = weight(_text_:in in 6752) [ClassicSimilarity], result of:
          0.011836439 = score(doc=6752,freq=4.0), product of:
            0.069613084 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.051176514 = queryNorm
            0.17003182 = fieldWeight in 6752, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=6752)
        0.027734846 = product of:
          0.05546969 = sum of:
            0.05546969 = weight(_text_:22 in 6752) [ClassicSimilarity], result of:
              0.05546969 = score(doc=6752,freq=2.0), product of:
                0.17921144 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051176514 = queryNorm
                0.30952093 = fieldWeight in 6752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6752)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    AutoSlog is a system that addresses the knowledge engineering bottleneck for information extraction. AutoSlog automatically creates domain-specific dictionaries for information extraction, given an appropriate training corpus. Describes experiments with AutoSlog in the terrorism, joint ventures and microelectronics domains. Compares the performance of AutoSlog across the three domains, discusses the lessons learned and presents results from two experiments which demonstrate that novice users can generate effective dictionaries using AutoSlog
    Date
    6. 3.1997 16:22:15
  16. Haas, S.W.: Natural language processing : toward large-scale, robust systems (1996) 0.03
    0.026380857 = product of:
      0.039571285 = sum of:
        0.011836439 = weight(_text_:in in 7415) [ClassicSimilarity], result of:
          0.011836439 = score(doc=7415,freq=4.0), product of:
            0.069613084 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.051176514 = queryNorm
            0.17003182 = fieldWeight in 7415, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=7415)
        0.027734846 = product of:
          0.05546969 = sum of:
            0.05546969 = weight(_text_:22 in 7415) [ClassicSimilarity], result of:
              0.05546969 = score(doc=7415,freq=2.0), product of:
                0.17921144 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051176514 = queryNorm
                0.30952093 = fieldWeight in 7415, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7415)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    State-of-the-art review of natural language processing, updating an earlier review published in ARIST 22(1987). Discusses important developments that have allowed for significant advances in the field of natural language processing: materials and resources; knowledge-based systems and statistical approaches; and a strong emphasis on evaluation. Reviews some natural language processing applications and common problems still awaiting solution. Considers closely related applications such as language generation and the generation phase of machine translation, which face the same problems as natural language processing. Covers natural language methodologies for information retrieval only briefly
  17. Kay, M.: ¬The proper place of men and machines in language translation (1997) 0.03
    0.025943225 = product of:
      0.038914837 = sum of:
        0.014646845 = weight(_text_:in in 1178) [ClassicSimilarity], result of:
          0.014646845 = score(doc=1178,freq=8.0), product of:
            0.069613084 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.051176514 = queryNorm
            0.21040362 = fieldWeight in 1178, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1178)
        0.02426799 = product of:
          0.04853598 = sum of:
            0.04853598 = weight(_text_:22 in 1178) [ClassicSimilarity], result of:
              0.04853598 = score(doc=1178,freq=2.0), product of:
                0.17921144 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051176514 = queryNorm
                0.2708308 = fieldWeight in 1178, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1178)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Machine translation stands no chance of filling actual needs for translation because, although there has been progress in relevant areas of computer science, advances in linguistics have not touched the core problems. Cooperative man-machine systems need to be developed. The author proposes a translator's amanuensis, incorporating into a word processor some simple facilities peculiar to translation. Gradual enhancements of such a system could lead to the original goal of machine translation
    Content
    Reprint of a Xerox PARC Working Paper which appeared in 1980
    Date
    31. 7.1996 9:22:19
  18. Godby, J.: WordSmith research project bridges gap between tokens and indexes (1998) 0.03
    0.025943225 = product of:
      0.038914837 = sum of:
        0.014646845 = weight(_text_:in in 4729) [ClassicSimilarity], result of:
          0.014646845 = score(doc=4729,freq=8.0), product of:
            0.069613084 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.051176514 = queryNorm
            0.21040362 = fieldWeight in 4729, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4729)
        0.02426799 = product of:
          0.04853598 = sum of:
            0.04853598 = weight(_text_:22 in 4729) [ClassicSimilarity], result of:
              0.04853598 = score(doc=4729,freq=2.0), product of:
                0.17921144 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051176514 = queryNorm
                0.2708308 = fieldWeight in 4729, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4729)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Reports on an OCLC natural language processing research project to develop methods for identifying terminology in unstructured electronic text, especially material associated with new cultural trends and emerging subjects. Current OCLC production software can only identify single words as indexable terms in full-text documents, thus a major goal of the WordSmith project is to develop software that can automatically identify and intelligently organize phrases for use in database indexes. By analyzing user terminology from local newspapers in the USA, the latest cultural trends and technical developments as well as personal and geographic names have been drawn out. Notes that this new vocabulary can also be mapped into reference works
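    A rough sketch of phrase identification in this spirit, assuming NLTK with its standard tokenizer and tagger data installed; the chunk grammar is a simple stand-in, not WordSmith's actual method:

    ```python
    import nltk  # assumes punkt and the default POS tagger data are installed

    GRAMMAR = r"NP: {<JJ>*<NN.*>+}"  # adjective(s) followed by noun(s)

    def candidate_phrases(text: str):
        """Yield adjective-noun sequences as candidate index phrases."""
        chunker = nltk.RegexpParser(GRAMMAR)
        tagged = nltk.pos_tag(nltk.word_tokenize(text))
        for subtree in chunker.parse(tagged).subtrees():
            if subtree.label() == "NP":
                yield " ".join(word for word, _ in subtree.leaves())

    text = "Local newspapers reveal new cultural trends and emerging subjects."
    print(list(candidate_phrases(text)))  # e.g. includes 'new cultural trends'
    ```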
    Source
    OCLC newsletter. 1998, no.234, Jul/Aug, S.22-24
  19. Hammwöhner, R.: TransRouter revisited : Decision support in the routing of translation projects (2000) 0.03
    0.025943225 = product of:
      0.038914837 = sum of:
        0.014646845 = weight(_text_:in in 5483) [ClassicSimilarity], result of:
          0.014646845 = score(doc=5483,freq=8.0), product of:
            0.069613084 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.051176514 = queryNorm
            0.21040362 = fieldWeight in 5483, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5483)
        0.02426799 = product of:
          0.04853598 = sum of:
            0.04853598 = weight(_text_:22 in 5483) [ClassicSimilarity], result of:
              0.04853598 = score(doc=5483,freq=2.0), product of:
                0.17921144 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051176514 = queryNorm
                0.2708308 = fieldWeight in 5483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5483)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This paper gives an outline of the final results of the TransRouter project. Within the scope of this project, a decision support system for translation managers has been developed to support the selection of appropriate routes for translation projects. In this paper, emphasis is put on the decision model, which is based on a stepwise refined assessment of translation routes. The workflow of using this system is considered as well
    Date
    10.12.2000 18:22:35
    Source
    Informationskompetenz - Basiskompetenz in der Informationsgesellschaft: Proceedings des 7. Internationalen Symposiums für Informationswissenschaft (ISI 2000), Hrsg.: G. Knorz u. R. Kuhlen
  20. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.02
    0.02488513 = product of:
      0.037327692 = sum of:
        0.012813321 = weight(_text_:in in 2541) [ClassicSimilarity], result of:
          0.012813321 = score(doc=2541,freq=12.0), product of:
            0.069613084 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.051176514 = queryNorm
            0.18406484 = fieldWeight in 2541, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2541)
        0.024514372 = product of:
          0.049028743 = sum of:
            0.049028743 = weight(_text_:22 in 2541) [ClassicSimilarity], result of:
              0.049028743 = score(doc=2541,freq=4.0), product of:
                0.17921144 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051176514 = queryNorm
                0.27358043 = fieldWeight in 2541, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2541)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The Specialized Information Services Division (SIS) of the National Library of Medicine (NLM) provides Web access to more than a dozen scientific databases on toxicology and the environment on TOXNET. Search queries on TOXNET often include misspelled or variant English words, medical and scientific jargon and chemical names. Following the example of search engines like Google and ClinicalTrials.gov, we set out to develop a spelling "suggestion" system for increased recall and precision in TOXNET searching. This paper describes development of dictionary technology that can be used in a variety of applications such as orthographic verification, writing aid, natural language processing, and information storage and retrieval. The design of the technology allows building complex applications using the components developed in the earlier phases of the work in a modular fashion without extensive rewriting of computer code. Since many of the potential applications envisioned for this work have on-line or web-based interfaces, the dictionaries and other computer components must have fast response, and must be adaptable to open-ended database vocabularies, including chemical nomenclature. The dictionary vocabulary for this work was derived from SIS and other databases and specialized resources, such as NLM's Unified Medical Language System (UMLS). The resulting technology, A-Z Dictionary (AZdict), has three major constituents: 1) the vocabulary list, 2) the word attributes that define part of speech and morphological relationships between words in the list, and 3) a set of programs that implements the retrieval of words and their attributes, and determines similarity between words (ChemSpell). These three components can be used in various applications such as spelling verification, spelling aid, part-of-speech tagging, paraphrasing, and many other natural language processing functions.
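    To make the word-similarity step concrete, a minimal sketch using only the standard library; the vocabulary is a toy stand-in for the AZdict word list, and difflib's ratio is just one possible ChemSpell-like measure:

    ```python
    import difflib

    # Toy stand-in for the AZdict vocabulary; the real system draws its
    # word list from NLM databases, UMLS, and chemical nomenclature.
    VOCABULARY = ["toluene", "toxicology", "benzene", "acetaminophen",
                  "formaldehyde", "environment"]

    def suggest(term: str, n: int = 3, cutoff: float = 0.6):
        """Rank vocabulary words by string similarity to the query term."""
        return difflib.get_close_matches(term.lower(), VOCABULARY,
                                         n=n, cutoff=cutoff)

    print(suggest("tolulene"))    # -> ['toluene']
    print(suggest("toxocology"))  # -> ['toxicology']
    ```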
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29

Types

  • a 469
  • m 73
  • el 71
  • s 28
  • x 13
  • p 7
  • b 2
  • d 1
  • n 1
  • r 1
