Search (23 results, page 1 of 2)

  • theme_ss:"Computerlinguistik"
  • type_ss:"a"
  • year_i:[2020 TO 2030}
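The three active filters above correspond to Solr filter queries. As a minimal sketch of how this result page's request could be reconstructed (the endpoint, `rows`, and `debugQuery` settings are assumptions; the field names and filter values are taken from the facet list above). Note the mixed brackets in the year range: `[` makes the lower bound inclusive and `}` makes the upper bound exclusive, so `[2020 TO 2030}` matches 2020 through 2029.

```python
from urllib.parse import urlencode

# Each active facet becomes its own fq (filter query) parameter;
# filters are cached and do not influence relevance scoring.
params = [
    ("q", "*:*"),
    ("fq", 'theme_ss:"Computerlinguistik"'),
    ("fq", 'type_ss:"a"'),
    ("fq", "year_i:[2020 TO 2030}"),   # inclusive lower, exclusive upper bound
    ("rows", "20"),
    ("debugQuery", "true"),            # requests per-document score explanations
]
query_string = urlencode(params)
print(query_string)
```

Appending `query_string` to a Solr `/select` handler URL would reproduce a request of this shape; `debugQuery=true` is what produces the score-explanation trees printed under each result.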
  1. Morris, V.: Automated language identification of bibliographic resources (2020) 0.03
    0.028059505 = product of:
      0.042089257 = sum of:
        0.014078482 = weight(_text_:information in 5749) [ClassicSimilarity], result of:
          0.014078482 = score(doc=5749,freq=2.0), product of:
            0.09073304 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.05168566 = queryNorm
            0.1551638 = fieldWeight in 5749, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=5749)
        0.028010774 = product of:
          0.05602155 = sum of:
            0.05602155 = weight(_text_:22 in 5749) [ClassicSimilarity], result of:
              0.05602155 = score(doc=5749,freq=2.0), product of:
                0.18099438 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05168566 = queryNorm
                0.30952093 = fieldWeight in 5749, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5749)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
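The tree above is Lucene's ClassicSimilarity (tf-idf) breakdown. A minimal sketch reproducing its arithmetic for result 1, using only constants shown in the tree (the `tf` and `idf` formulas are ClassicSimilarity's documented definitions):

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(freq):
    # ClassicSimilarity: tf = sqrt(termFreq)
    return math.sqrt(freq)

# Constants from the explanation tree for term "information" in doc 5749:
query_norm = 0.05168566      # shared normalizer across all query clauses
field_norm = 0.0625          # per-document field-length normalization
term_idf = idf(20772, 44218) # ~1.7554779, as printed in the tree

query_weight = term_idf * query_norm             # ~0.09073304
field_weight = tf(2.0) * term_idf * field_norm   # ~0.1551638
score = query_weight * field_weight              # ~0.014078482

# The "_text_:22" clause contributes 0.028010774 (already scaled by its
# inner coord(1/2)).  coord(2/3) then scales the sum, because only 2 of
# the query's 3 optional clauses matched this document.
final = (score + 0.028010774) * (2.0 / 3.0)      # ~0.028059505, the displayed score
print(final)
```

The same building blocks (tf, idf, fieldNorm, queryNorm, coord) account for every explanation tree in this list; only the constants differ per term and document.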
    
    Abstract
    This article describes experiments in the use of machine learning techniques at the British Library to assign language codes to catalog records, in order to provide information about the language of content of the resources described. In the first phase of the project, language codes were assigned to 1.15 million records with 99.7% confidence. The automated language identification tools developed will be used to contribute to future enhancement of over 4 million legacy records.
    Date
    2. 3.2020 19:04:22
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.03
    0.027363513 = product of:
      0.082090534 = sum of:
        0.082090534 = product of:
          0.2462716 = sum of:
            0.2462716 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.2462716 = score(doc=862,freq=2.0), product of:
                0.43819162 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05168566 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    https://arxiv.org/abs/2212.06721
  3. ¬Der Student aus dem Computer (2023) 0.02
    0.016339619 = product of:
      0.049018852 = sum of:
        0.049018852 = product of:
          0.098037705 = sum of:
            0.098037705 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.098037705 = score(doc=1079,freq=2.0), product of:
                0.18099438 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05168566 = queryNorm
                0.5416616 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    27. 1.2023 16:22:55
  4. Bager, J.: ¬Die Text-KI ChatGPT schreibt Fachtexte, Prosa, Gedichte und Programmcode (2023) 0.01
    0.009336925 = product of:
      0.028010774 = sum of:
        0.028010774 = product of:
          0.05602155 = sum of:
            0.05602155 = weight(_text_:22 in 835) [ClassicSimilarity], result of:
              0.05602155 = score(doc=835,freq=2.0), product of:
                0.18099438 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05168566 = queryNorm
                0.30952093 = fieldWeight in 835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=835)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    29.12.2022 18:22:55
  5. Rieger, F.: Lügende Computer (2023) 0.01
    0.009336925 = product of:
      0.028010774 = sum of:
        0.028010774 = product of:
          0.05602155 = sum of:
            0.05602155 = weight(_text_:22 in 912) [ClassicSimilarity], result of:
              0.05602155 = score(doc=912,freq=2.0), product of:
                0.18099438 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05168566 = queryNorm
                0.30952093 = fieldWeight in 912, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=912)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    16. 3.2023 19:22:55
  6. Zaitseva, E.M.: Developing linguistic tools of thematic search in library information systems (2023) 0.01
    0.008295825 = product of:
      0.024887474 = sum of:
        0.024887474 = weight(_text_:information in 1187) [ClassicSimilarity], result of:
          0.024887474 = score(doc=1187,freq=16.0), product of:
            0.09073304 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.05168566 = queryNorm
            0.27429342 = fieldWeight in 1187, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1187)
      0.33333334 = coord(1/3)
    
    Abstract
    Within the R&D program "Information support of research by scientists and specialists on the basis of RNPLS&T Open Archive - the system of scientific knowledge aggregation", the RNPLS&T analyzes the use of linguistic tools for thematic search in modern library information systems and the prospects for their development. The author defines the key common characteristics of the e-catalogs of the largest Russian libraries revealed at the first stage of the analysis. Based on these common characteristics and a detailed comparative analysis, the author outlines and substantiates vectors for enhancing the search interfaces of e-catalogs. The focus is on linguistic tools for thematic search in library information systems; the key vectors suggested are: use of thematic search at different search levels with clear-cut level differentiation; use of combined functionality within the thematic search system; implementation of classification search in all e-catalogs; hierarchical representation of classifications; and use of matching systems for classification information retrieval languages - and, in the longer term, between classification and verbal information retrieval languages, and among various verbal information retrieval languages. The author formulates practical recommendations to improve thematic search in library information systems.
  7. Aizawa, A.; Kohlhase, M.: Mathematical information retrieval (2021) 0.01
    0.008212449 = product of:
      0.024637345 = sum of:
        0.024637345 = weight(_text_:information in 667) [ClassicSimilarity], result of:
          0.024637345 = score(doc=667,freq=8.0), product of:
            0.09073304 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.05168566 = queryNorm
            0.27153665 = fieldWeight in 667, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=667)
      0.33333334 = coord(1/3)
    
    Abstract
    We present an overview of the NTCIR Math Tasks organized during NTCIR-10, 11, and 12. These tasks are primarily dedicated to techniques for searching mathematical content with formula expressions. In this chapter, we first summarize the task design and introduce test collections generated in the tasks. We also describe the features and main challenges of mathematical information retrieval systems and discuss future perspectives in the field.
    Series
    ¬The Information retrieval series, vol 43
    Source
    Evaluating information retrieval and access tasks. Eds.: Sakai, T., Oard, D., Kando, N. [https://doi.org/10.1007/978-981-15-5554-1_12]
  8. Xiang, R.; Chersoni, E.; Lu, Q.; Huang, C.-R.; Li, W.; Long, Y.: Lexical data augmentation for sentiment analysis (2021) 0.00
    0.0041479124 = product of:
      0.012443737 = sum of:
        0.012443737 = weight(_text_:information in 392) [ClassicSimilarity], result of:
          0.012443737 = score(doc=392,freq=4.0), product of:
            0.09073304 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.05168566 = queryNorm
            0.13714671 = fieldWeight in 392, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=392)
      0.33333334 = coord(1/3)
    
    Abstract
    Machine learning methods, especially deep learning models, have achieved impressive performance in various natural language processing tasks, including sentiment analysis. However, deep learning models demand large amounts of training data. Data augmentation techniques are widely used to generate new instances, based on modifications to existing data or on external knowledge bases, to address the scarcity of annotated data, which hinders the full potential of machine learning techniques. This paper presents our work using part-of-speech (POS) focused lexical substitution for data augmentation (PLSDA) to enhance the performance of machine learning algorithms in sentiment analysis. We exploit POS information to identify words to be replaced and investigate different augmentation strategies to find semantically related substitutions when generating new instances. The choice of POS tags, as well as a variety of strategies such as semantic-based substitution methods and sampling methods, are discussed in detail. Performance evaluation focuses on the comparison between PLSDA and two previous lexical substitution-based data augmentation methods, one thesaurus-based and the other based on lexicon manipulation. Our approach is tested on five English sentiment analysis benchmarks: SST-2, MR, IMDB, Twitter, and AirRecord. Hyperparameters such as the candidate similarity threshold and the number of newly generated instances are optimized. Results show that six classifiers (SVM, LSTM, BiLSTM-AT, bidirectional encoder representations from transformers [BERT], XLNet, and RoBERTa) trained with PLSDA achieve accuracy improvements of more than 0.6% compared with the two previous lexical substitution methods, averaged over the five benchmarks. Introducing POS constraints and well-designed augmentation strategies can improve the reliability of lexical data augmentation methods. Consequently, PLSDA significantly improves the performance of sentiment analysis algorithms.
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.11, S.1432-1447
  9. Andrushchenko, M.; Sandberg, K.; Turunen, R.; Marjanen, J.; Hatavara, M.; Kurunmäki, J.; Nummenmaa, T.; Hyvärinen, M.; Teräs, K.; Peltonen, J.; Nummenmaa, J.: Using parsed and annotated corpora to analyze parliamentarians' talk in Finland (2022) 0.00
    0.0041479124 = product of:
      0.012443737 = sum of:
        0.012443737 = weight(_text_:information in 471) [ClassicSimilarity], result of:
          0.012443737 = score(doc=471,freq=4.0), product of:
            0.09073304 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.05168566 = queryNorm
            0.13714671 = fieldWeight in 471, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=471)
      0.33333334 = coord(1/3)
    
    Abstract
    We present a search system for grammatically analyzed corpora of Finnish parliamentary records and interviews with former parliamentarians, annotated with metadata on talk structure and the parliamentarians involved, and discuss their use through carefully chosen digital humanities case studies. We first introduce the construction, contents, and principles of use of the corpora. Then we discuss the application of the search system and the corpora to study how politicians talk about power, how ideological terms are used in political speech, and how to identify narratives in the data. All case studies stem from questions in the humanities and the social sciences, but rely on the grammatically parsed corpora for both identifying and quantifying passages of interest. Finally, the paper discusses the role of natural language processing methods for questions in the (digital) humanities. It makes the claim that a digital humanities inquiry of parliamentary speech and interviews with politicians cannot rely only on computational humanities modeling, but needs to accommodate a range of perspectives, starting with simple searches and quantitative exploration and ending with modeling. Furthermore, the digital humanities need a more thorough discussion about how the use of tools from information science and technology alters the research questions posed in the humanities.
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.2, S.288-302
  10. Suissa, O.; Elmalech, A.; Zhitomirsky-Geffet, M.: Text analysis using deep neural networks in digital humanities and information science (2022) 0.00
    0.0041479124 = product of:
      0.012443737 = sum of:
        0.012443737 = weight(_text_:information in 491) [ClassicSimilarity], result of:
          0.012443737 = score(doc=491,freq=4.0), product of:
            0.09073304 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.05168566 = queryNorm
            0.13714671 = fieldWeight in 491, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=491)
      0.33333334 = coord(1/3)
    
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.2, S.268-287
  11. Schaer, P.: Sprachmodelle und neuronale Netze im Information Retrieval (2023) 0.00
    0.0041479124 = product of:
      0.012443737 = sum of:
        0.012443737 = weight(_text_:information in 799) [ClassicSimilarity], result of:
          0.012443737 = score(doc=799,freq=4.0), product of:
            0.09073304 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.05168566 = queryNorm
            0.13714671 = fieldWeight in 799, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=799)
      0.33333334 = coord(1/3)
    
    Abstract
    In recent years, language model technologies of the most varied kinds have found their way into information science. These language models, known under names such as GPT, ELMo, or BERT, have in common that, thanks to very large web corpora, they draw on a data basis that was unthinkable for earlier language-model approaches. At the same time, these models build on recent developments in machine learning, in particular artificial neural networks. These technologies have also gained a foothold in information retrieval (IR), achieving sudden, substantial performance gains shortly after their introduction. Neural networks, in combination with large pretrained language models and contextualized word embeddings, have driven these gains. Whereas in past years stagnating retrieval performance was repeatedly lamented, with improvements shown only against "weak baselines", these technical and methodological innovations have produced impressive performance gains in tasks such as classic ad-hoc retrieval, machine translation, and question answering. This chapter gives a brief overview of the foundations of language models and neural networks, in order to explain the basic building blocks behind current technologies such as ELMo or BERT, which dominate the world of NLP and IR at the moment.
  12. Ali, C.B.; Haddad, H.; Slimani, Y.: Multi-word terms selection for information retrieval (2022) 0.00
    0.0041479124 = product of:
      0.012443737 = sum of:
        0.012443737 = weight(_text_:information in 900) [ClassicSimilarity], result of:
          0.012443737 = score(doc=900,freq=4.0), product of:
            0.09073304 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.05168566 = queryNorm
            0.13714671 = fieldWeight in 900, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=900)
      0.33333334 = coord(1/3)
    
    Source
    Information discovery and delivery 51(2022) no.1, S.xx-xx
  13. Azpiazu, I.M.; Soledad Pera, M.: Is cross-lingual readability assessment possible? (2020) 0.00
    0.0040641082 = product of:
      0.012192324 = sum of:
        0.012192324 = weight(_text_:information in 5868) [ClassicSimilarity], result of:
          0.012192324 = score(doc=5868,freq=6.0), product of:
            0.09073304 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.05168566 = queryNorm
            0.1343758 = fieldWeight in 5868, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=5868)
      0.33333334 = coord(1/3)
    
    Abstract
    Most research efforts related to automatic readability assessment focus on the design of strategies that apply to a specific language. These state-of-the-art strategies are highly dependent on linguistic features that best suit the language for which they were intended, constraining their adaptability and making it difficult to determine whether they would remain effective if they were applied to estimate the level of difficulty of texts in other languages. In this article, we present the results of a study designed to determine the feasibility of a cross-lingual readability assessment strategy. For doing so, we first analyzed the most common features used for readability assessment and determined their influence on the readability prediction process of 6 different languages: English, Spanish, Basque, Italian, French, and Catalan. In addition, we developed a cross-lingual readability assessment strategy that serves as a means to empirically explore the potential advantages of employing a single strategy (and set of features) for readability assessment in different languages, including interlanguage prediction agreement and prediction accuracy improvement for low-resource languages.
    Source
    Journal of the Association for Information Science and Technology. 71(2020) no.6, S.644-656
  14. Corbara, S.; Moreo, A.; Sebastiani, F.: Syllabic quantity patterns as rhythmic features for Latin authorship attribution (2023) 0.00
    0.0035196205 = product of:
      0.010558861 = sum of:
        0.010558861 = weight(_text_:information in 846) [ClassicSimilarity], result of:
          0.010558861 = score(doc=846,freq=2.0), product of:
            0.09073304 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.05168566 = queryNorm
            0.116372846 = fieldWeight in 846, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=846)
      0.33333334 = coord(1/3)
    
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.1, S.128-141
  15. Lund, B.D.; Wang, T.; Mannuru, N.R.; Nie, B.; Shimray, S.; Wang, Z.: ChatGPT and a new academic reality : artificial Intelligence-written research papers and the ethics of the large language models in scholarly publishing (2023) 0.00
    0.0035196205 = product of:
      0.010558861 = sum of:
        0.010558861 = weight(_text_:information in 943) [ClassicSimilarity], result of:
          0.010558861 = score(doc=943,freq=2.0), product of:
            0.09073304 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.05168566 = queryNorm
            0.116372846 = fieldWeight in 943, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=943)
      0.33333334 = coord(1/3)
    
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.5, S.570-581
  16. Meng, K.; Ba, Z.; Ma, Y.; Li, G.: ¬A network coupling approach to detecting hierarchical linkages between science and technology (2024) 0.00
    0.0035196205 = product of:
      0.010558861 = sum of:
        0.010558861 = weight(_text_:information in 1205) [ClassicSimilarity], result of:
          0.010558861 = score(doc=1205,freq=2.0), product of:
            0.09073304 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.05168566 = queryNorm
            0.116372846 = fieldWeight in 1205, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1205)
      0.33333334 = coord(1/3)
    
    Source
    Journal of the Association for Information Science and Technology. 75(2024) no.2, S.167-187
  17. Zhang, Y.; Zhang, C.; Li, J.: Joint modeling of characters, words, and conversation contexts for microblog keyphrase extraction (2020) 0.00
    0.002933017 = product of:
      0.008799051 = sum of:
        0.008799051 = weight(_text_:information in 5816) [ClassicSimilarity], result of:
          0.008799051 = score(doc=5816,freq=2.0), product of:
            0.09073304 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.05168566 = queryNorm
            0.09697737 = fieldWeight in 5816, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5816)
      0.33333334 = coord(1/3)
    
    Source
    Journal of the Association for Information Science and Technology. 71(2020) no.5, S.553-567
  18. Geißler, S.: Natürliche Sprachverarbeitung und Künstliche Intelligenz : ein wachsender Markt mit vielen Chancen. Das Beispiel Kairntech (2020) 0.00
    0.002933017 = product of:
      0.008799051 = sum of:
        0.008799051 = weight(_text_:information in 5924) [ClassicSimilarity], result of:
          0.008799051 = score(doc=5924,freq=2.0), product of:
            0.09073304 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.05168566 = queryNorm
            0.09697737 = fieldWeight in 5924, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5924)
      0.33333334 = coord(1/3)
    
    Source
    Information - Wissenschaft und Praxis. 71(2020) H.2/3, S.95-106
  19. Escolano, C.; Costa-Jussà, M.R.; Fonollosa, J.A.: From bilingual to multilingual neural-based machine translation by incremental training (2021) 0.00
    0.002933017 = product of:
      0.008799051 = sum of:
        0.008799051 = weight(_text_:information in 97) [ClassicSimilarity], result of:
          0.008799051 = score(doc=97,freq=2.0), product of:
            0.09073304 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.05168566 = queryNorm
            0.09697737 = fieldWeight in 97, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=97)
      0.33333334 = coord(1/3)
    
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.2, S.190-203
  20. Lee, G.E.; Sun, A.: Understanding the stability of medical concept embeddings (2021) 0.00
    0.002933017 = product of:
      0.008799051 = sum of:
        0.008799051 = weight(_text_:information in 159) [ClassicSimilarity], result of:
          0.008799051 = score(doc=159,freq=2.0), product of:
            0.09073304 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.05168566 = queryNorm
            0.09697737 = fieldWeight in 159, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=159)
      0.33333334 = coord(1/3)
    
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.3, S.346-356