Search (32 results, page 1 of 2)

  • × theme_ss:"Computerlinguistik"
  • × type_ss:"a"
  • × year_i:[2020 TO 2030}
  1. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.06
    0.063893676 = product of:
      0.09584051 = sum of:
        0.07803193 = product of:
          0.23409578 = sum of:
            0.23409578 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.23409578 = score(doc=862,freq=2.0), product of:
                0.41652718 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.049130294 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.017808583 = weight(_text_:of in 862) [ClassicSimilarity], result of:
          0.017808583 = score(doc=862,freq=10.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.23179851 = fieldWeight in 862, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.6666667 = coord(2/3)
    
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges - summary and question answering - prompt ChatGPT to produce original content (98-99%) from a single text entry and sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and simple grammatical set for understanding the writing mechanics of chatbots in evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
    Source
    https://arxiv.org/abs/2212.06721
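
The relevance figures in this listing follow Lucene's ClassicSimilarity (TF-IDF) formula, and the indented trees spell out every factor. A minimal sketch, recomputing the 0.0639 score of this first hit purely from the numbers printed in its explanation tree (score = coord x sum of queryWeight x fieldWeight, with queryWeight = idf x queryNorm and fieldWeight = sqrt(tf) x idf x fieldNorm):

```python
# Recompute the explain tree for document 862 ("The Turing deception")
# from the ClassicSimilarity factors shown in the listing above.
import math

def term_score(freq, idf, query_norm, field_norm):
    """queryWeight * fieldWeight for one query term (ClassicSimilarity)."""
    tf = math.sqrt(freq)                  # 1.4142135 for freq=2, 3.1622777 for freq=10
    query_weight = idf * query_norm       # e.g. 8.478011 * 0.049130294 = 0.41652718
    field_weight = tf * idf * field_norm  # e.g. 1.4142135 * 8.478011 * 0.046875 = 0.56201804
    return query_weight * field_weight

query_norm = 0.049130294
s_3a = term_score(freq=2.0, idf=8.478011, query_norm=query_norm, field_norm=0.046875)
s_of = term_score(freq=10.0, idf=1.5637573, query_norm=query_norm, field_norm=0.046875)

# the "_text_:3a" clause is wrapped in coord(1/3); the two-clause sum in coord(2/3)
score = (s_3a * (1 / 3) + s_of) * (2 / 3)
print(round(s_3a, 8))   # ~0.23409578
print(round(score, 9))  # ~0.063893676
```
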
  2. Morris, V.: Automated language identification of bibliographic resources (2020) 0.04
    0.035091337 = product of:
      0.052637003 = sum of:
        0.026011098 = weight(_text_:of in 5749) [ClassicSimilarity], result of:
          0.026011098 = score(doc=5749,freq=12.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.33856338 = fieldWeight in 5749, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=5749)
        0.026625905 = product of:
          0.05325181 = sum of:
            0.05325181 = weight(_text_:22 in 5749) [ClassicSimilarity], result of:
              0.05325181 = score(doc=5749,freq=2.0), product of:
                0.17204592 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049130294 = queryNorm
                0.30952093 = fieldWeight in 5749, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5749)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This article describes experiments in the use of machine learning techniques at the British Library to assign language codes to catalog records, in order to provide information about the language of content of the resources described. In the first phase of the project, language codes were assigned to 1.15 million records with 99.7% confidence. The automated language identification tools developed will be used to contribute to future enhancement of over 4 million legacy records.
    Date
    2. 3.2020 19:04:22
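
The article does not name the British Library's tooling, and the sketch below is only an illustrative stand-in: it tags a record's text with an ISO 639-1 code using the off-the-shelf langdetect package and keeps the prediction only above a confidence threshold. The 99.7% figure in the abstract refers to the project's own models, not to this library.

```python
# Illustrative only: assign a language code to catalogue record text when the
# detector is sufficiently confident. Generic langdetect package, not the
# British Library's own models.
from langdetect import detect_langs, DetectorFactory

DetectorFactory.seed = 0  # make langdetect deterministic

def language_code(text, threshold=0.99):
    """Return an ISO 639-1 code, or None if confidence is below threshold."""
    best = detect_langs(text)[0]          # e.g. [en:0.9999963]
    return best.lang if best.prob >= threshold else None

records = [
    "Automated language identification of bibliographic resources",
    "Die Text-KI ChatGPT schreibt Fachtexte, Prosa, Gedichte und Programmcode",
]
for r in records:
    print(language_code(r), "|", r)
```
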
  3. ¬Der Student aus dem Computer (2023) 0.02
    0.015531778 = product of:
      0.046595335 = sum of:
        0.046595335 = product of:
          0.09319067 = sum of:
            0.09319067 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.09319067 = score(doc=1079,freq=2.0), product of:
                0.17204592 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049130294 = queryNorm
                0.5416616 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    27. 1.2023 16:22:55
  4. Bager, J.: ¬Die Text-KI ChatGPT schreibt Fachtexte, Prosa, Gedichte und Programmcode (2023) 0.01
    0.008875302 = product of:
      0.026625905 = sum of:
        0.026625905 = product of:
          0.05325181 = sum of:
            0.05325181 = weight(_text_:22 in 835) [ClassicSimilarity], result of:
              0.05325181 = score(doc=835,freq=2.0), product of:
                0.17204592 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049130294 = queryNorm
                0.30952093 = fieldWeight in 835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=835)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    29.12.2022 18:22:55
  5. Rieger, F.: Lügende Computer (2023) 0.01
    0.008875302 = product of:
      0.026625905 = sum of:
        0.026625905 = product of:
          0.05325181 = sum of:
            0.05325181 = weight(_text_:22 in 912) [ClassicSimilarity], result of:
              0.05325181 = score(doc=912,freq=2.0), product of:
                0.17204592 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049130294 = queryNorm
                0.30952093 = fieldWeight in 912, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=912)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    16. 3.2023 19:22:55
  6. Zaitseva, E.M.: Developing linguistic tools of thematic search in library information systems (2023) 0.01
    0.008849156 = product of:
      0.026547467 = sum of:
        0.026547467 = weight(_text_:of in 1187) [ClassicSimilarity], result of:
          0.026547467 = score(doc=1187,freq=32.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.34554482 = fieldWeight in 1187, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1187)
      0.33333334 = coord(1/3)
    
    Abstract
    Within the R&D program "Information support of research by scientists and specialists on the basis of RNPLS&T Open Archive - the system of scientific knowledge aggregation", the RNPLS&T analyzes the use of linguistic tools of thematic search in modern library information systems and the prospects for their development. The author defines the key common characteristics of e-catalogs of the largest Russian libraries revealed at the first stage of the analysis. Based on the specified common characteristics and a detailed comparative analysis, the author outlines and substantiates the vectors for enhancing search interfaces of e-catalogs. The focus is on linguistic tools of thematic search in library information systems; the key vectors are suggested: use of thematic search at different search levels with clear-cut level differentiation; use of combined functionality within the thematic search system; implementation of classification search in all e-catalogs; hierarchical representation of classifications; use of matching systems for classification information retrieval languages and, in the longer term, for classification and verbal information retrieval languages as well as various verbal information retrieval languages. The author formulates practical recommendations to improve thematic search in library information systems.
  7. Lee, G.E.; Sun, A.: Understanding the stability of medical concept embeddings (2021) 0.01
    0.008568158 = product of:
      0.025704475 = sum of:
        0.025704475 = weight(_text_:of in 159) [ClassicSimilarity], result of:
          0.025704475 = score(doc=159,freq=30.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.33457235 = fieldWeight in 159, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=159)
      0.33333334 = coord(1/3)
    
    Abstract
    Frequency is one of the major factors for training quality word embeddings. Several studies have recently discussed the stability of word embeddings in the general domain and suggested factors influencing the stability. In this work, we conduct a detailed analysis of the stability of concept embeddings in the medical domain, particularly in relation to concept frequency. The analysis reveals the surprisingly high stability of low-frequency concepts: low-frequency (<100) concepts have the same high stability as high-frequency (>1,000) concepts. To develop a deeper understanding of this finding, we propose a new factor, the noisiness of context words, which influences the stability of medical concept embeddings regardless of high or low frequency. We evaluate the proposed factor by showing the linear correlation with the stability of medical concept embeddings. The correlations are clear and consistent across various groups of medical concepts. Based on the linear relations, we make suggestions on ways to adjust the noisiness of context words for the improvement of stability. Finally, we demonstrate that the linear relation of the proposed factor extends to word embedding stability in the general domain.
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.3, S.346-356
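
The paper's stability measure is not reproduced here; a common proxy for embedding stability, and one way to read the abstract, is the overlap of a concept's k nearest neighbours across two independently trained embedding runs. A small numpy/scikit-learn sketch of that proxy, on synthetic vectors:

```python
# Stability proxy: fraction of shared k-nearest neighbours for the same
# vocabulary across two independently trained embedding matrices.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_sets(matrix, k):
    nn = NearestNeighbors(n_neighbors=k + 1, metric="cosine").fit(matrix)
    _, idx = nn.kneighbors(matrix)
    return [set(row[1:]) for row in idx]          # drop the self-match in column 0

def stability(run_a, run_b, k=10):
    """Mean neighbour overlap per concept, in [0, 1]."""
    sets_a, sets_b = knn_sets(run_a, k), knn_sets(run_b, k)
    return float(np.mean([len(a & b) / k for a, b in zip(sets_a, sets_b)]))

rng = np.random.default_rng(0)
emb1 = rng.normal(size=(500, 64))
emb2 = emb1 + rng.normal(scale=0.01, size=emb1.shape)   # nearly identical runs
print(stability(emb1, emb2, k=10))                      # close to 1.0
```
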
  8. Shree, P.: ¬The journey of Open AI GPT models (2020) 0.01
    0.008395046 = product of:
      0.025185138 = sum of:
        0.025185138 = weight(_text_:of in 869) [ClassicSimilarity], result of:
          0.025185138 = score(doc=869,freq=20.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.32781258 = fieldWeight in 869, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=869)
      0.33333334 = coord(1/3)
    
    Abstract
    Generative Pre-trained Transformer (GPT) models by OpenAI have taken the natural language processing (NLP) community by storm by introducing very powerful language models. These models can perform various NLP tasks like question answering, textual entailment, text summarisation, etc. without any supervised training. These language models need very few to no examples to understand the tasks and perform on par with or even better than state-of-the-art models trained in a supervised fashion. In this article we cover the journey of these models and how they have evolved over a period of two years. 1. Discussion of the GPT-1 paper (Improving Language Understanding by Generative Pre-training). 2. Discussion of the GPT-2 paper (Language Models are unsupervised multitask learners) and its subsequent improvements over GPT-1. 3. Discussion of the GPT-3 paper (Language models are few shot learners) and the improvements which have made it one of the most powerful models NLP has seen to date. This article assumes familiarity with the basics of NLP terminology and the transformer architecture.
    Source
    https://medium.com/walmartglobaltech/the-journey-of-open-ai-gpt-models-32d95b7b7fb2
  9. Chou, C.; Chu, T.: ¬An analysis of BERT (NLP) for assisted subject indexing for Project Gutenberg (2022) 0.01
    0.0081944335 = product of:
      0.024583299 = sum of:
        0.024583299 = weight(_text_:of in 1139) [ClassicSimilarity], result of:
          0.024583299 = score(doc=1139,freq=14.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.31997898 = fieldWeight in 1139, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1139)
      0.33333334 = coord(1/3)
    
    Abstract
    In light of AI (Artificial Intelligence) and NLP (Natural language processing) technologies, this article examines the feasibility of using AI/NLP models to enhance the subject indexing of digital resources. While BERT (Bidirectional Encoder Representations from Transformers) models are widely used in scholarly communities, the authors assess whether BERT models can be used in machine-assisted indexing in the Project Gutenberg collection, through suggesting Library of Congress subject headings filtered by certain Library of Congress Classification subclass labels. The findings of this study are informative for further research on BERT models to assist with automatic subject indexing for digital library collections.
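
The study's models and thresholds are not given in the abstract; as a hedged illustration of the general idea (ranking candidate Library of Congress Subject Headings against a book description with a transformer sentence encoder), the sketch below uses sentence-transformers. The model name and the candidate headings are placeholders, not the authors' setup.

```python
# Illustrative machine-assisted subject suggestion: rank candidate LCSH
# headings by cosine similarity to a book description. Placeholder model
# and headings; not the configuration used in the study.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # any sentence encoder would do

candidate_headings = [
    "Artificial intelligence",
    "Natural language processing (Computer science)",
    "Subject headings",
    "Sea stories",
]
description = ("A study of transformer language models applied to "
               "automatic subject indexing of digitized books.")

doc_vec = model.encode(description, convert_to_tensor=True)
head_vecs = model.encode(candidate_headings, convert_to_tensor=True)
scores = util.cos_sim(doc_vec, head_vecs)[0]

for heading, score in sorted(zip(candidate_headings, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {heading}")
```
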
  10. Pepper, S.; Arnaud, P.J.L.: Absolutely PHAB : toward a general model of associative relations (2020) 0.01
    0.007976521 = product of:
      0.023929562 = sum of:
        0.023929562 = weight(_text_:of in 103) [ClassicSimilarity], result of:
          0.023929562 = score(doc=103,freq=26.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.31146988 = fieldWeight in 103, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=103)
      0.33333334 = coord(1/3)
    
    Abstract
    There have been many attempts at classifying the semantic modification relations (R) of N + N compounds but this work has not led to the acceptance of a definitive scheme, so that devising a reusable classification is a worthwhile aim. The scope of this undertaking is extended to other binominal lexemes, i.e. units that contain two thing-morphemes without explicitly stating R, like prepositional units, N + relational adjective units, etc. The 25-relation taxonomy of Bourque (2014) was tested against over 15,000 binominal lexemes from 106 languages and extended to a 29-relation scheme ("Bourque2") through the introduction of two new reversible relations. Bourque2 is then mapped onto Hatcher's (1960) four-relation scheme (extended by the addition of a fifth relation, similarity , as "Hatcher2"). This results in a two-tier system usable at different degrees of granularities. On account of its semantic proximity to compounding, metonymy is then taken into account, following Janda's (2011) suggestion that it plays a role in word formation; Peirsman and Geeraerts' (2006) inventory of 23 metonymic patterns is mapped onto Bourque2, confirming the identity of metonymic and binominal modification relations. Finally, Blank's (2003) and Koch's (2001) work on lexical semantics justifies the addition to the scheme of a third, superordinate level which comprises the three Aristotelean principles of similarity, contiguity and contrast.
  11. Harari, Y.N.: ¬[Yuval Noah Harari argues that] AI has hacked the operating system of human civilisation (2023) 0.01
    0.007663594 = product of:
      0.022990782 = sum of:
        0.022990782 = weight(_text_:of in 953) [ClassicSimilarity], result of:
          0.022990782 = score(doc=953,freq=6.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.2992506 = fieldWeight in 953, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=953)
      0.33333334 = coord(1/3)
    
    Abstract
    Storytelling computers will change the course of human history, says the historian and philosopher.
    Source
    https://www.economist.com/by-invitation/2023/04/28/yuval-noah-harari-argues-that-ai-has-hacked-the-operating-system-of-human-civilisation?giftId=6982bba3-94bc-441d-9153-6d42468817ad
  12. Meng, K.; Ba, Z.; Ma, Y.; Li, G.: ¬A network coupling approach to detecting hierarchical linkages between science and technology (2024) 0.01
    0.0075087575 = product of:
      0.022526272 = sum of:
        0.022526272 = weight(_text_:of in 1205) [ClassicSimilarity], result of:
          0.022526272 = score(doc=1205,freq=16.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.2932045 = fieldWeight in 1205, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=1205)
      0.33333334 = coord(1/3)
    
    Abstract
    Detecting science-technology hierarchical linkages is beneficial for understanding deep interactions between science and technology (S&T). Previous studies have mainly focused on linear linkages between S&T but ignored their structural linkages. In this paper, we propose a network coupling approach to inspect hierarchical interactions of S&T by integrating their knowledge linkages and structural linkages. S&T knowledge networks are first enhanced with bidirectional encoder representation from transformers (BERT) knowledge alignment, and then their hierarchical structures are identified based on K-core decomposition. Hierarchical coupling preferences and strengths of the S&T networks over time are further calculated based on similarities of coupling nodes' degree distribution and similarities of coupling edges' weight distribution. Extensive experimental results indicate that our approach is feasible and robust in identifying the coupling hierarchy with superior performance compared to other isomorphism and dissimilarity algorithms. Our research extends the mindset of S&T linkage measurement by identifying patterns and paths of the interaction of S&T hierarchical knowledge.
    Source
    Journal of the Association for Information Science and Technology. 75(2023) no.2, S.167-187
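
The full pipeline (BERT-based knowledge alignment plus hierarchical coupling metrics) is not reproduced; the sketch below only illustrates the structural ingredients named in the abstract: a K-core decomposition with networkx and a simple cosine similarity between two layers' degree distributions as a stand-in for coupling preference. The graphs are synthetic.

```python
# Sketch of the structural ingredients: K-core decomposition of a network and
# a simple similarity between two layers' degree distributions.
# Stand-in metrics only; not the paper's exact coupling measures.
import networkx as nx
import numpy as np

def degree_histogram(graph, max_degree=30):
    hist = np.zeros(max_degree + 1)
    for _, d in graph.degree():
        hist[min(d, max_degree)] += 1
    return hist / hist.sum()

science = nx.erdos_renyi_graph(200, 0.05, seed=1)      # toy "science" layer
technology = nx.erdos_renyi_graph(200, 0.04, seed=2)   # toy "technology" layer

core_s = nx.core_number(science)        # node -> core index (hierarchy level)
print("max core in science layer:", max(core_s.values()))

h_s, h_t = degree_histogram(science), degree_histogram(technology)
coupling_preference = float(h_s @ h_t / (np.linalg.norm(h_s) * np.linalg.norm(h_t)))
print("degree-distribution similarity:", round(coupling_preference, 3))
```
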
  13. Xiang, R.; Chersoni, E.; Lu, Q.; Huang, C.-R.; Li, W.; Long, Y.: Lexical data augmentation for sentiment analysis (2021) 0.01
    0.007337332 = product of:
      0.022011995 = sum of:
        0.022011995 = weight(_text_:of in 392) [ClassicSimilarity], result of:
          0.022011995 = score(doc=392,freq=22.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.28651062 = fieldWeight in 392, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=392)
      0.33333334 = coord(1/3)
    
    Abstract
    Machine learning methods, especially deep learning models, have achieved impressive performance in various natural language processing tasks including sentiment analysis. However, deep learning models are more demanding of training data. Data augmentation techniques are widely used to generate new instances based on modifications to existing data or relying on external knowledge bases to address the scarcity of annotated data, which hinders the full potential of machine learning techniques. This paper presents our work using part-of-speech (POS) focused lexical substitution for data augmentation (PLSDA) to enhance the performance of machine learning algorithms in sentiment analysis. We exploit POS information to identify words to be replaced and investigate different augmentation strategies to find semantically related substitutions when generating new instances. The choice of POS tags as well as a variety of strategies such as semantic-based substitution methods and sampling methods are discussed in detail. Performance evaluation focuses on the comparison between PLSDA and two previous lexical substitution-based data augmentation methods, one thesaurus-based and the other based on lexicon manipulation. Our approach is tested on five English sentiment analysis benchmarks: SST-2, MR, IMDB, Twitter, and AirRecord. Hyperparameters such as the candidate similarity threshold and number of newly generated instances are optimized. Results show that six classifiers (SVM, LSTM, BiLSTM-AT, bidirectional encoder representations from transformers [BERT], XLNet, and RoBERTa) trained with PLSDA achieve an accuracy improvement of more than 0.6% compared with the two previous lexical substitution methods, averaged over the five benchmarks. Introducing POS constraints and well-designed augmentation strategies can improve the reliability of lexical data augmentation methods. Consequently, PLSDA significantly improves the performance of sentiment analysis algorithms.
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.11, S.1432-1447
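
PLSDA combines several substitution and sampling strategies; the sketch below reduces it to the core move described in the abstract: pick content words by POS tag and replace them with WordNet synonyms to create augmented training instances. The POS filter and the random synonym choice are simplifications, not the paper's configuration.

```python
# Reduced illustration of POS-focused lexical substitution: swap adjectives and
# adverbs for a random WordNet synonym to create augmented copies.
# NLTK resource names may differ slightly across NLTK versions.
import random
import nltk
from nltk.corpus import wordnet as wn

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)
nltk.download("wordnet", quiet=True)

REPLACEABLE = {"JJ": wn.ADJ, "RB": wn.ADV}   # Penn tag prefix -> WordNet POS

def augment(sentence, p=0.5, seed=0):
    random.seed(seed)
    out = []
    for word, tag in nltk.pos_tag(nltk.word_tokenize(sentence)):
        wn_pos = REPLACEABLE.get(tag[:2])
        if wn_pos and random.random() < p:
            lemmas = {l.name().replace("_", " ")
                      for s in wn.synsets(word, pos=wn_pos) for l in s.lemmas()}
            lemmas.discard(word)
            if lemmas:
                word = random.choice(sorted(lemmas))
        out.append(word)
    return " ".join(out)

print(augment("The movie was surprisingly good and genuinely funny"))
```
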
  14. Andrushchenko, M.; Sandberg, K.; Turunen, R.; Marjanen, J.; Hatavara, M.; Kurunmäki, J.; Nummenmaa, T.; Hyvärinen, M.; Teräs, K.; Peltonen, J.; Nummenmaa, J.: Using parsed and annotated corpora to analyze parliamentarians' talk in Finland (2022) 0.01
    0.007337332 = product of:
      0.022011995 = sum of:
        0.022011995 = weight(_text_:of in 471) [ClassicSimilarity], result of:
          0.022011995 = score(doc=471,freq=22.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.28651062 = fieldWeight in 471, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=471)
      0.33333334 = coord(1/3)
    
    Abstract
    We present a search system for grammatically analyzed corpora of Finnish parliamentary records and interviews with former parliamentarians, annotated with metadata of talk structure and involved parliamentarians, and discuss their use through carefully chosen digital humanities case studies. We first introduce the construction, contents, and principles of use of the corpora. Then we discuss the application of the search system and the corpora to study how politicians talk about power, how ideological terms are used in political speech, and how to identify narratives in the data. All case studies stem from questions in the humanities and the social sciences, but rely on the grammatically parsed corpora in both identifying and quantifying passages of interest. Finally, the paper discusses the role of natural language processing methods for questions in the (digital) humanities. It makes the claim that a digital humanities inquiry of parliamentary speech and interviews with politicians cannot only rely on computational humanities modeling, but needs to accommodate a range of perspectives starting with simple searches, quantitative exploration, and ending with modeling. Furthermore, the digital humanities need a more thorough discussion about how the utilization of tools from information science and technologies alter the research questions posed in the humanities.
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.2, S.288-302
  15. Zhang, Y.; Zhang, C.; Li, J.: Joint modeling of characters, words, and conversation contexts for microblog keyphrase extraction (2020) 0.01
    0.006995871 = product of:
      0.020987613 = sum of:
        0.020987613 = weight(_text_:of in 5816) [ClassicSimilarity], result of:
          0.020987613 = score(doc=5816,freq=20.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.27317715 = fieldWeight in 5816, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5816)
      0.33333334 = coord(1/3)
    
    Abstract
    Millions of messages are produced on microblog platforms every day, leading to the pressing need for automatic identification of key points from the massive texts. To absorb salient content from the vast bulk of microblog posts, this article focuses on the task of microblog keyphrase extraction. In previous work, most efforts treat messages as independent documents and might suffer from the data sparsity problem exhibited in short and informal microblog posts. In contrast, we propose to enrich contexts by exploiting conversations initiated by target posts and formed by their replies, which are generally centered on topics relevant to the target posts and therefore helpful for keyphrase identification. Concretely, we present a neural keyphrase extraction framework, which has 2 modules: a conversation context encoder and a keyphrase tagger. The conversation context encoder captures indicative representation from the conversation contexts and feeds the representation into the keyphrase tagger, and the keyphrase tagger extracts salient words from target posts. The 2 modules were trained jointly to optimize the conversation context encoding and keyphrase extraction processes. In the conversation context encoder, we leverage hierarchical structures to capture the word-level indicative representation and message-level indicative representation hierarchically. In both of the modules, we apply character-level representations, which enables the model to explore morphological features and deal with the out-of-vocabulary problem caused by the informal language style of microblog messages. Extensive comparison results on real-life data sets indicate that our model outperforms state-of-the-art models from previous studies.
    Source
    Journal of the Association for Information Science and Technology. 71(2020) no.5, S.553-567
  16. Soni, S.; Lerman, K.; Eisenstein, J.: Follow the leader : documents on the leading edge of semantic change get more citations (2021) 0.01
    0.006995871 = product of:
      0.020987613 = sum of:
        0.020987613 = weight(_text_:of in 169) [ClassicSimilarity], result of:
          0.020987613 = score(doc=169,freq=20.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.27317715 = fieldWeight in 169, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=169)
      0.33333334 = coord(1/3)
    
    Abstract
    Diachronic word embeddings (vector representations of words over time) offer remarkable insights into the evolution of language and provide a tool for quantifying sociocultural change from text documents. Prior work has used such embeddings to identify shifts in the meaning of individual words. However, simply knowing that a word has changed in meaning is insufficient to identify the instances of word usage that convey the historical meaning or the newer meaning. In this study, we link diachronic word embeddings to documents, by situating those documents as leaders or laggards with respect to ongoing semantic changes. Specifically, we propose a novel method to quantify the degree of semantic progressiveness in each word usage, and then show how these usages can be aggregated to obtain scores for each document. We analyze two large collections of documents, representing legal opinions and scientific articles. Documents that are scored as semantically progressive receive a larger number of citations, indicating that they are especially influential. Our work thus provides a new technique for identifying lexical semantic leaders and demonstrates a new link between progressive use of language and influence in a citation network.
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.4, S.478-492
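
The authors' exact scoring is not reproduced; one hedged reading of the abstract is that a word usage is "progressive" when its vector sits closer to the word's later-period embedding than to its earlier one, and a document's score is the average over its scored usages. A small numpy sketch of that reading, on synthetic vectors:

```python
# Hedged sketch of "semantic progressiveness": a usage counts as progressive if
# its vector is closer to the word's later-period embedding than to its earlier one.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def usage_progressiveness(usage_vec, emb_prev, emb_next):
    return cosine(usage_vec, emb_next) - cosine(usage_vec, emb_prev)

def document_score(usage_vectors, emb_prev, emb_next):
    """Average progressiveness over all scored word usages in a document."""
    return float(np.mean([usage_progressiveness(u, emb_prev, emb_next)
                          for u in usage_vectors]))

rng = np.random.default_rng(0)
emb_prev, emb_next = rng.normal(size=64), rng.normal(size=64)
usages = [emb_next + rng.normal(scale=0.3, size=64) for _ in range(20)]  # lean "new"
print(round(document_score(usages, emb_prev, emb_next), 3))              # > 0
```
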
  17. Tao, J.; Zhou, L.; Hickey, K.: Making sense of the black-boxes : toward interpretable text classification using deep learning models (2023) 0.01
    0.006995871 = product of:
      0.020987613 = sum of:
        0.020987613 = weight(_text_:of in 990) [ClassicSimilarity], result of:
          0.020987613 = score(doc=990,freq=20.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.27317715 = fieldWeight in 990, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=990)
      0.33333334 = coord(1/3)
    
    Abstract
    Text classification is a common task in data science. Despite the superior performance of deep learning-based models in various text classification tasks, their black-box nature poses significant challenges for wide adoption. The knowledge-to-action framework emphasizes several principles concerning the application and use of knowledge, such as ease-of-use, customization, and feedback. With the guidance of the above principles and the properties of interpretable machine learning, we identify the design requirements for and propose an interpretable deep learning (IDeL) based framework for text classification models. IDeL comprises three main components: feature penetration, instance aggregation, and feature perturbation. We evaluate our implementation of the framework with two distinct case studies: fake news detection and social question categorization. The experimental results provide evidence for the efficacy of IDeL components in enhancing the interpretability of text classification models. Moreover, the findings are generalizable across binary and multi-label, multi-class classification problems. The proposed IDeL framework introduces a unique iField perspective for building trusted models in data science by improving the transparency of and access to advanced black-box models.
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.6, S.685-700
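
IDeL's three components are not reproduced here; as a minimal stand-in for the feature-perturbation idea, the sketch below runs an occlusion test on an ordinary TF-IDF plus logistic regression classifier: drop each token and record how far the predicted probability moves. The toy training data and the classifier are assumptions, not the paper's setup.

```python
# Occlusion-style perturbation: the importance of a token is the drop in the
# predicted class probability when that token is removed. Toy stand-in for the
# "feature perturbation" component described in the abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["breaking miracle cure shocks doctors", "city council approves budget",
               "aliens secretly run the government", "local library extends opening hours"]
train_labels = [1, 0, 1, 0]                      # 1 = fake, 0 = legitimate (toy data)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

def token_importance(text, label=1):
    base = clf.predict_proba([text])[0][label]
    tokens = text.split()
    scores = {}
    for i, tok in enumerate(tokens):
        occluded = " ".join(tokens[:i] + tokens[i + 1:])
        scores[tok] = base - clf.predict_proba([occluded])[0][label]
    return scores

print(token_importance("miracle cure shocks the council"))
```
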
  18. Escolano, C.; Costa-Jussà, M.R.; Fonollosa, J.A.: From bilingual to multilingual neural-based machine translation by incremental training (2021) 0.01
    0.0066368664 = product of:
      0.019910598 = sum of:
        0.019910598 = weight(_text_:of in 97) [ClassicSimilarity], result of:
          0.019910598 = score(doc=97,freq=18.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.25915858 = fieldWeight in 97, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=97)
      0.33333334 = coord(1/3)
    
    Abstract
    A common intermediate language representation in neural machine translation can be used to extend bilingual systems by incremental training. We propose a new architecture based on introducing an interlingual loss as an additional training objective. By adding and forcing this interlingual loss, we can train multiple encoders and decoders for each language, sharing among them a common intermediate representation. Translation results on the low-resource tasks (Turkish-English and Kazakh-English tasks) show a BLEU improvement of up to 2.8 points. However, results on a larger dataset (Russian-English and Kazakh-English) show BLEU losses of a similar amount. While our system provides improvements only for the low-resource tasks in terms of translation quality, our system is capable of quickly deploying new language pairs without the need to retrain the rest of the system, which may be a game changer in some situations. Specifically, what is most relevant regarding our architecture is that it is capable of: reducing the number of production systems, with respect to the number of languages, from quadratic to linear; incrementally adding a new language to the system without retraining the languages already there; and allowing for translations from the new language to all the others present in the system.
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.2, S.190-203
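
The architecture itself is not reproduced; the fragment below only shows the shape of the training objective described in the abstract: the usual translation cross-entropy plus an interlingual term that pulls the two languages' encoder representations together. The pooling choice (mean over positions), the MSE distance, and the tensor sizes are assumptions.

```python
# Shape of an interlingual training objective: translation loss plus a penalty
# on the distance between source- and target-side encoder states.
# Pooling, dimensions, and weighting are illustrative assumptions.
import torch
import torch.nn.functional as F

def interlingual_loss(logits, gold_ids, enc_src, enc_tgt, weight=1.0, pad_id=0):
    """
    logits  : (batch, tgt_len, vocab)   decoder outputs
    gold_ids: (batch, tgt_len)          reference target token ids
    enc_src, enc_tgt: (batch, len, d)   encoder states for the two languages
    """
    ce = F.cross_entropy(logits.transpose(1, 2), gold_ids, ignore_index=pad_id)
    inter = F.mse_loss(enc_src.mean(dim=1), enc_tgt.mean(dim=1))  # pooled distance
    return ce + weight * inter

batch, src_len, tgt_len, d, vocab = 2, 7, 5, 16, 100
loss = interlingual_loss(torch.randn(batch, tgt_len, vocab),
                         torch.randint(1, vocab, (batch, tgt_len)),
                         torch.randn(batch, src_len, d),
                         torch.randn(batch, src_len, d))
print(float(loss))
```
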
  19. Azpiazu, I.M.; Soledad Pera, M.: Is cross-lingual readability assessment possible? (2020) 0.01
    0.0066221016 = product of:
      0.019866304 = sum of:
        0.019866304 = weight(_text_:of in 5868) [ClassicSimilarity], result of:
          0.019866304 = score(doc=5868,freq=28.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.25858206 = fieldWeight in 5868, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=5868)
      0.33333334 = coord(1/3)
    
    Abstract
    Most research efforts related to automatic readability assessment focus on the design of strategies that apply to a specific language. These state-of-the-art strategies are highly dependent on linguistic features that best suit the language for which they were intended, constraining their adaptability and making it difficult to determine whether they would remain effective if they were applied to estimate the level of difficulty of texts in other languages. In this article, we present the results of a study designed to determine the feasibility of a cross-lingual readability assessment strategy. For doing so, we first analyzed the most common features used for readability assessment and determined their influence on the readability prediction process of 6 different languages: English, Spanish, Basque, Italian, French, and Catalan. In addition, we developed a cross-lingual readability assessment strategy that serves as a means to empirically explore the potential advantages of employing a single strategy (and set of features) for readability assessment in different languages, including interlanguage prediction agreement and prediction accuracy improvement for low-resource languages.
    Source
    Journal of the Association for Information Science and Technology. 71(2020) no.6, S.644-656
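
The article's feature inventory is much larger; the sketch below only computes a few surface features of the kind that transfer across languages (average sentence length, average word length, lexical diversity), which could feed any readability regressor. It is an illustration, not the study's feature set.

```python
# A few language-agnostic surface features of the kind compared in
# cross-lingual readability work; illustrative, not the article's feature set.
import re

def readability_features(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"\w+", text, flags=re.UNICODE)
    return {
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
        "type_token_ratio": len({w.lower() for w in words}) / max(len(words), 1),
    }

print(readability_features("El gato duerme. El perro corre y el gato mira."))
print(readability_features("Readability assessment estimates how difficult a text is to understand."))
```
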
  20. Corbara, S.; Moreo, A.; Sebastiani, F.: Syllabic quantity patterns as rhythmic features for Latin authorship attribution (2023) 0.01
    0.0065027745 = product of:
      0.019508323 = sum of:
        0.019508323 = weight(_text_:of in 846) [ClassicSimilarity], result of:
          0.019508323 = score(doc=846,freq=12.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.25392252 = fieldWeight in 846, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=846)
      0.33333334 = coord(1/3)
    
    Abstract
    It is well known that, within the Latin production of written text, peculiar metric schemes were followed not only in poetic compositions, but also in many prose works. Such metric patterns were based on so-called syllabic quantity, that is, on the length of the involved syllables, and there is substantial evidence suggesting that certain authors had a preference for certain metric patterns over others. In this research we investigate the possibility of employing syllabic quantity as a basis for deriving rhythmic features for the task of computational authorship attribution of Latin prose texts. We test the impact of these features on the authorship attribution task when combined with other topic-agnostic features. Our experiments, carried out on three different datasets using support vector machines (SVMs), show that rhythmic features based on syllabic quantity are beneficial in discriminating among Latin prose authors.
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.1, S.128-141
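
Latin syllabification and quantity rules are non-trivial and not reproduced here; the sketch below only shows the downstream step described in the abstract: represent each text as a long/short (L/S) syllable sequence, take character n-grams of that sequence as rhythmic features, and train a linear SVM. The quantity strings and author labels are fabricated placeholders.

```python
# Downstream shape of rhythm-based authorship attribution: character n-grams
# over long/short syllable sequences, fed to a linear SVM. The L/S strings
# below are placeholders, not real scansions of Latin prose.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

quantity_sequences = ["LSLLSSLSL", "LSLSSLLSL", "SSLLSLSSL", "SLLSSLSLL"]
authors = ["Cicero", "Cicero", "Seneca", "Seneca"]      # toy labels

model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(2, 4)),  # rhythmic n-grams
    LinearSVC(),
)
model.fit(quantity_sequences, authors)
print(model.predict(["LSLLSLLSL"]))
```
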

Languages

  • e 28
  • d 4

Types