Search (135 results, page 2 of 7)

  • language_ss:"e"
  • type_ss:"a"
  • year_i:[2020 TO 2030}
  1. Manley, S.: Letters to the editor and the race for publication metrics (2022) 0.01
    0.010773714 = product of:
      0.021547427 = sum of:
        0.021547427 = product of:
          0.043094855 = sum of:
            0.043094855 = weight(_text_:22 in 547) [ClassicSimilarity], result of:
              0.043094855 = score(doc=547,freq=2.0), product of:
                0.15912095 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045439374 = queryNorm
                0.2708308 = fieldWeight in 547, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=547)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    6. 4.2022 19:22:26
  2. Wu, P.F.: Veni, vidi, vici? : On the rise of scrape-and-report scholarship in online reviews research (2023) 0.01
    
    Date
    22. 1.2023 18:33:53
  3. Candela, G.: An automatic data quality approach to assess semantic data from cultural heritage institutions (2023) 0.01
    
    Date
    22. 6.2023 18:23:31
  4. Lee, Y.-Y.; Ke, H.; Yen, T.-Y.; Huang, H.-H.; Chen, H.-H.: Combining and learning word embedding with WordNet for semantic relatedness and similarity measurement (2020) 0.01
    
    Abstract
    In this research, we propose 3 different approaches to measure the semantic relatedness between 2 words: (i) boost the performance of the GloVe word embedding model via removing or transforming abnormal dimensions; (ii) linearly combine the information extracted from WordNet and word embeddings; and (iii) utilize word embeddings and 12 kinds of linguistic information extracted from WordNet as features for Support Vector Regression. We conducted our experiments on 8 benchmark data sets, and computed Spearman correlations between the outputs of our methods and the ground truth. We report our results together with 3 state-of-the-art approaches. The experimental results show that our method can outperform state-of-the-art approaches in all the selected English benchmark data sets.
  5. Tharani, K.: Just KOS! : enriching digital collections with hypertexts to enhance accessibility of non-western knowledge materials in libraries (2020) 0.01
    
    Abstract
    The knowledge organization systems (KOS) in use at libraries are social constructs that were conceived in the Euro-American context to organize and retrieve Western knowledge materials. As social constructs of the West, the effectiveness of library KOSs is limited when it comes to organization and retrieval of non-Western knowledge materials. How can librarians respond if asked to make non-Western knowledge materials as accessible as Western materials in their libraries? The accessibility of Western and non-Western knowledge materials in libraries need not be an either-or proposition. By way of a case study, a practical way forward is presented by which librarians can use their professional agency and existing digital technologies to exercise social justice. More specifically I demonstrate the design and development of a specialized KOS that enriches digital collections with hypertext features to enhance the accessibility of non-Western knowledge materials in libraries.
  6. Day, R.E.: Occupational classes, information technologies and the wage (2020) 0.01
    
    Abstract
    Occupational classifications mix epistemic and social notions of class in interesting ways that show not only the descriptive but also the prescriptive uses of documentality. In this paper, I would like to discuss how occupational classes have shifted from being a priori to being a posteriori documentary devices for both describing and prescribing labor. Post-coordinate indexing and algorithmic documentary systems must be viewed within post-Fordist constructions of identity and capitalism's construction of social sense by the wage if we are to have a better understanding of digital labor. In post-Fordist environments, documentation and its information technologies are not simply descriptive tools but are at the center of struggles of capital's prescription and direction of labor. Just like earlier documentary devices but even more prescriptively and socially internalized, information technology is not just a tool for users but rather is a device in the construction of such users and what they use (and are used by) at the level of their very being.
  7. Sanfilippo, M.R.; Shvartzshnaider, Y.; Reyes, I.; Nissenbaum, H.; Egelman, S.: Disaster privacy/privacy disaster (2020) 0.01
    
  8. Eadon, Y.M.: (Not) part of the system : resolving epistemic disconnect through archival reference (2020) 0.01
    
    Abstract
    Information seeking practices of conspiracists are examined by introducing the new archival user group of "conspiracist researchers." The epistemic commitments of archival knowledge organization (AKO), rooted in provenance and access/secrecy, fundamentally conflict with the epistemic features of conspiracism, namely: mistrust of authority figures and institutions, accompanying overreliance on firsthand inquiry, and a tendency towards indicative mood/confirmation bias. Through interviews with reference personnel working at two state archives in the American west, I illustrate that the reference interaction is a vital turning point for the conspiracist researcher. Reference personnel can build trust with conspiracist researchers by displaying epistemic empathy and subverting hegemonic archival logics. The burden of bridging the epistemic gap through archival user education thus falls almost exclusively onto reference personnel. Domain analysis is presented as one possible starting point for developing an archival knowledge organization system (AKOS) that could be more epistemically flexible.
  9. Oduntan, O.; Ruthven, I.: People and places : bridging the information gaps in refugee integration (2021) 0.01
    
  10. Gruda, D.; Karanatsiou, D.; Mendhekar, K.; Golbeck, J.; Vakali, A.: I alone can fix it : examining interactions between narcissistic leaders and anxious followers on Twitter using a machine learning approach (2021) 0.01
    
  11. Huvila, I.: Making and taking information (2022) 0.01
    
  12. Lowe, D.B.; Dollinger, I.; Koster, T.; Herbert, B.E.: Text mining for type of research classification (2021) 0.01
    
  13. Marques Redigolo, F.; Lopes Fujita, M.S.; Gil-Leiva, I.: Guidelines for subject analysis in subject cataloging (2022) 0.01
    
  14. Slota, S.C.; Fleischmann, K.R.; Lee, M.K.; Greenberg, S.R.; Nigam, I.; Zimmerman, T.; Rodriguez, S.; Snow, J.: A feeling for the data : how government and nonprofit stakeholders negotiate value conflicts in data science approaches to ending homelessness (2023) 0.01
    
  15. Berg, A.; Nelimarkka, M.: Do you see what I see? : measuring the semantic differences in image-recognition services' outputs (2023) 0.01
    
  16. Higgins, C.: 'I coulda had class' : the difficulties of classifying film in Library of Congress Classification and Dewey Decimal Classification (2022) 0.01
    
  17. Gil-Berrozpe, J.C.: Description, categorization, and representation of hyponymy in environmental terminology (2022) 0.01
    
    Abstract
    Terminology has evolved from static and prescriptive theories to dynamic and cognitive approaches. Thanks to these approaches, there have been significant advances in the design and elaboration of terminological resources. This has resulted in the creation of tools such as terminological knowledge bases, which are able to show how concepts are interrelated through different semantic or conceptual relations. Of these relations, hyponymy is the most relevant to terminology work because it deals with concept categorization and term hierarchies. This doctoral thesis presents an enhancement of the semantic structure of EcoLexicon, a terminological knowledge base on environmental science. The aim of this research was to improve the description, categorization, and representation of hyponymy in environmental terminology. Therefore, we created HypoLexicon, a new stand-alone module for EcoLexicon in the form of a hyponymy-based terminological resource. This resource contains twelve terminological entries from four specialized domains (Biology, Chemistry, Civil Engineering, and Geology), which consist of 309 concepts and 465 terms associated with those concepts. This research was mainly based on the theoretical premises of Frame-based Terminology. This theory was combined with Cognitive Linguistics, for conceptual description and representation; Corpus Linguistics, for the extraction and processing of linguistic and terminological information; and Ontology, related to hyponymy and relevant for concept categorization. HypoLexicon was constructed from the following materials: (i) the EcoLexicon English Corpus; (ii) other specialized terminological resources, including EcoLexicon; (iii) Sketch Engine; and (iv) Lexonomy. This thesis explains the methodologies applied for corpus extraction and compilation, corpus analysis, the creation of conceptual hierarchies, and the design of the terminological template. 
    The results of the creation of HypoLexicon are discussed by highlighting the information in the hyponymy-based terminological entries: (i) parent concept (hypernym); (ii) child concepts (hyponyms, with various hyponymy levels); (iii) terminological definitions; (iv) conceptual categories; (v) hyponymy subtypes; and (vi) hyponymic contexts. Furthermore, the features and the navigation within HypoLexicon are described from the user interface and the admin interface. In conclusion, this doctoral thesis lays the groundwork for developing a terminological resource that includes definitional, relational, ontological and contextual information about specialized hypernyms and hyponyms. All of this information on specialized knowledge is simple to follow thanks to the hierarchical structure of the terminological template used in HypoLexicon. Therefore, not only does it enhance knowledge representation, but it also facilitates its acquisition.
  18. Geras, A.; Siudem, G.; Gagolewski, M.: Should we introduce a dislike button for academic articles? (2020) 0.01
    
    Date
    6. 1.2020 18:10:22
  19. Bullard, J.; Dierking, A.; Grundner, A.: Centring LGBT2QIA+ subjects in knowledge organization systems (2020) 0.01
    
    Date
    6.10.2020 21:22:33
  20. Lorentzen, D.G.: Bridging polarised Twitter discussions : the interactions of the users in the middle (2021) 0.01
    
    Date
    20. 1.2015 18:30:22
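Each ranking score above is the product of a Lucene ClassicSimilarity (TF-IDF) term weight and two coord(1/2) factors, as spelled out in the explain tree under result 1. A minimal sketch reproducing those numbers, with the leaf values copied from that tree; the tf = sqrt(freq) and idf formulas are the standard ClassicSimilarity definitions, not something stated explicitly on this page:

```python
import math

# Leaf values copied from the explain tree of result 1 (doc 547) above.
freq = 2.0                    # termFreq of "22" in the matched field
doc_freq, max_docs = 3622, 44218
query_norm = 0.045439374
field_norm = 0.0546875

# Standard ClassicSimilarity components:
idf = 1 + math.log(max_docs / (doc_freq + 1))   # ~3.5018296
tf = math.sqrt(freq)                            # ~1.4142135
query_weight = idf * query_norm                 # ~0.15912095
field_weight = tf * idf * field_norm            # ~0.2708308
weight = query_weight * field_weight            # ~0.043094855

# Two coord(1/2) factors (one of two query clauses matched) halve it twice.
score = weight * 0.5 * 0.5                      # ~0.010773714
```

The same arithmetic with the per-document freq, docFreq, and fieldNorm values explains every score in the list, which is why hits sharing those values (e.g. results 4-16) tie at 0.010712966.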
