Search (129 results, page 1 of 7)

  • year_i:[2020 TO 2030}
  1. Wu, P.F.: Veni, vidi, vici? : On the rise of scrape-and-report scholarship in online reviews research (2023) 0.09
    0.08768053 = product of:
      0.17536107 = sum of:
        0.17536107 = sum of:
          0.12673281 = weight(_text_:report in 896) [ClassicSimilarity], result of:
            0.12673281 = score(doc=896,freq=4.0), product of:
              0.24374367 = queryWeight, product of:
                4.7537646 = idf(docFreq=1035, maxDocs=44218)
                0.05127382 = queryNorm
              0.519943 = fieldWeight in 896, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.7537646 = idf(docFreq=1035, maxDocs=44218)
                0.0546875 = fieldNorm(doc=896)
          0.048628263 = weight(_text_:22 in 896) [ClassicSimilarity], result of:
            0.048628263 = score(doc=896,freq=2.0), product of:
              0.17955218 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05127382 = queryNorm
              0.2708308 = fieldWeight in 896, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=896)
      0.5 = coord(1/2)
    
    Abstract
    JASIST has in recent years received many submissions reporting data analytics based on "Big Data" of online reviews scraped from various platforms. By outlining major issues in this type of scrape-and-report scholarship and providing a set of recommendations, this essay encourages online reviews researchers to look at Big Data with a critical eye and treat online reviews as a sociotechnical "thing" produced within the fabric of sociomaterial life.
    Date
    22. 1.2023 18:33:53
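The score breakdowns shown for each result follow Lucene's ClassicSimilarity (classic TF-IDF). As a sanity check, this minimal Python sketch, using only the constants printed in the explain tree for result 1 above, reproduces the reported weights:

```python
import math

# Constants copied from the explain output for result 1 (doc 896).
MAX_DOCS = 44218
QUERY_NORM = 0.05127382
FIELD_NORM = 0.0546875  # fieldNorm(doc=896)

def term_weight(freq, doc_freq):
    """ClassicSimilarity weight of one query term in one document."""
    tf = math.sqrt(freq)                              # tf(freq)
    idf = 1.0 + math.log(MAX_DOCS / (doc_freq + 1))   # idf(docFreq, maxDocs)
    query_weight = idf * QUERY_NORM
    field_weight = tf * idf * FIELD_NORM
    return query_weight * field_weight

w_report = term_weight(freq=4.0, doc_freq=1035)  # ~0.12673281 (weight of "report")
w_22 = term_weight(freq=2.0, doc_freq=3622)      # ~0.04862826 (weight of "22")
score = 0.5 * (w_report + w_22)                  # coord(1/2) -> ~0.08768053
```

The same arithmetic (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), coord = matched/total query clauses) accounts for every breakdown on this page; only freq, docFreq, and fieldNorm vary per document.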
  2. Bullard, J.; Dierking, A.; Grundner, A.: Centring LGBT2QIA+ subjects in knowledge organization systems (2020) 0.06
    0.059246525 = product of:
      0.11849305 = sum of:
        0.11849305 = sum of:
          0.076811686 = weight(_text_:report in 5996) [ClassicSimilarity], result of:
            0.076811686 = score(doc=5996,freq=2.0), product of:
              0.24374367 = queryWeight, product of:
                4.7537646 = idf(docFreq=1035, maxDocs=44218)
                0.05127382 = queryNorm
              0.31513304 = fieldWeight in 5996, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.7537646 = idf(docFreq=1035, maxDocs=44218)
                0.046875 = fieldNorm(doc=5996)
          0.041681368 = weight(_text_:22 in 5996) [ClassicSimilarity], result of:
            0.041681368 = score(doc=5996,freq=2.0), product of:
              0.17955218 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05127382 = queryNorm
              0.23214069 = fieldWeight in 5996, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5996)
      0.5 = coord(1/2)
    
    Abstract
    This paper contains a report of two interdependent knowledge organization (KO) projects for an LGBT2QIA+ library. The authors, in the context of volunteer library work for an independent library, redesigned the classification system and subject cataloguing guidelines to centre LGBT2QIA+ subjects. We discuss the priorities of creating and maintaining knowledge organization systems for a historically marginalized community and address the challenge that queer subjectivity poses to the goals of KO. The classification system features a focus on identity and physically reorganizes the library space in a way that accounts for the multiple and overlapping labels that constitute the currently articulated boundaries of this community. The subject heading system focuses on making visible topics and elements of identity made invisible by universal systems and by the newly implemented classification system. We discuss how this project may inform KO for other marginalized subjects, particularly through process and documentation that prioritizes transparency and the acceptance of an unfinished endpoint for queer KO.
    Date
    6.10.2020 21:22:33
  3. Wang, S.; Ma, Y.; Mao, J.; Bai, Y.; Liang, Z.; Li, G.: Quantifying scientific breakthroughs by a novel disruption indicator based on knowledge entities (2023) 0.05
    0.049372107 = product of:
      0.09874421 = sum of:
        0.09874421 = sum of:
          0.06400974 = weight(_text_:report in 882) [ClassicSimilarity], result of:
            0.06400974 = score(doc=882,freq=2.0), product of:
              0.24374367 = queryWeight, product of:
                4.7537646 = idf(docFreq=1035, maxDocs=44218)
                0.05127382 = queryNorm
              0.26261088 = fieldWeight in 882, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.7537646 = idf(docFreq=1035, maxDocs=44218)
                0.0390625 = fieldNorm(doc=882)
          0.034734476 = weight(_text_:22 in 882) [ClassicSimilarity], result of:
            0.034734476 = score(doc=882,freq=2.0), product of:
              0.17955218 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05127382 = queryNorm
              0.19345059 = fieldWeight in 882, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=882)
      0.5 = coord(1/2)
    
    Date
    22. 1.2023 18:37:33
  4. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.04
    0.04071821 = product of:
      0.08143642 = sum of:
        0.08143642 = product of:
          0.24430925 = sum of:
            0.24430925 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.24430925 = score(doc=862,freq=2.0), product of:
                0.4347 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05127382 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  5. Boczkowski, P.; Mitchelstein, E.: ¬The digital environment : How we live, learn, work, and play now (2021) 0.04
    0.039497685 = product of:
      0.07899537 = sum of:
        0.07899537 = sum of:
          0.051207792 = weight(_text_:report in 1003) [ClassicSimilarity], result of:
            0.051207792 = score(doc=1003,freq=2.0), product of:
              0.24374367 = queryWeight, product of:
                4.7537646 = idf(docFreq=1035, maxDocs=44218)
                0.05127382 = queryNorm
              0.2100887 = fieldWeight in 1003, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.7537646 = idf(docFreq=1035, maxDocs=44218)
                0.03125 = fieldNorm(doc=1003)
          0.02778758 = weight(_text_:22 in 1003) [ClassicSimilarity], result of:
            0.02778758 = score(doc=1003,freq=2.0), product of:
              0.17955218 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05127382 = queryNorm
              0.15476047 = fieldWeight in 1003, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1003)
      0.5 = coord(1/2)
    
    Abstract
    Increasingly we live through our personal screens; we work, play, socialize, and learn digitally. The shift to remote everything during the pandemic was another step in a decades-long march toward the digitization of everyday life made possible by innovations in media, information, and communication technology. In The Digital Environment, Pablo Boczkowski and Eugenia Mitchelstein offer a new way to understand the role of the digital in our daily lives, calling on us to turn our attention from our discrete devices and apps to the array of artifacts and practices that make up the digital environment that envelops every aspect of our social experience. Boczkowski and Mitchelstein explore a series of issues raised by the digital takeover of everyday life, drawing on interviews with a variety of experts. They show how existing inequities of gender, race, ethnicity, education, and class are baked into the design and deployment of technology, and describe emancipatory practices that counter this--including the use of Twitter as a platform for activism through such hashtags as #BlackLivesMatter and #MeToo. They discuss the digitization of parenting, schooling, and dating--noting, among other things, that today we can both begin and end relationships online. They describe how digital media shape our consumption of sports, entertainment, and news, and consider the dynamics of political campaigns, disinformation, and social activism. Finally, they report on developments in three areas that will be key to our digital future: data science, virtual reality, and space exploration.
    Date
    22. 6.2023 18:25:18
  6. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.03
    0.033931844 = product of:
      0.06786369 = sum of:
        0.06786369 = product of:
          0.20359105 = sum of:
            0.20359105 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.20359105 = score(doc=5669,freq=2.0), product of:
                0.4347 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05127382 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    "The English-language Wikipedia now has more than 6 million articles. The German-language Wikipedia is second with 2.3 million articles, and the French-language Wikipedia third with 2.1 million articles (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> and Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Following the publication of the 6-millionth article in the English-language Wikipedia last week, the community newspaper "Wikipedia Signpost" has called for a moratorium on the publication of articles about companies. This is not meant as an accusation against the Wikimedia Foundation, but the current measures for protecting the encyclopedia against abusive undeclared paid editing are clearly not working. *"Since the volunteer authors are currently being overwhelmed by advertising in the form of Wikipedia articles, and since the WMF appears unable to counter this, the only viable way forward for the authors would be to prohibit the creation of new articles about companies for the time being"*, writes the user Smallbones in his editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> on today's issue."
  7. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.03
    0.033931844 = product of:
      0.06786369 = sum of:
        0.06786369 = product of:
          0.20359105 = sum of:
            0.20359105 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
              0.20359105 = score(doc=1000,freq=2.0), product of:
                0.4347 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05127382 = queryNorm
                0.46834838 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Master thesis Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. Cf. also the presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
  8. Rügenhagen, M.; Beck, T.S.; Sartorius, E.J.: Information integrity in the era of Fake News : ein neuer Studienschwerpunkt für wissenschaftliche Bibliotheken und Forschungseinrichtungen (2020) 0.03
    0.025603896 = product of:
      0.051207792 = sum of:
        0.051207792 = product of:
          0.102415584 = sum of:
            0.102415584 = weight(_text_:report in 5858) [ClassicSimilarity], result of:
              0.102415584 = score(doc=5858,freq=2.0), product of:
                0.24374367 = queryWeight, product of:
                  4.7537646 = idf(docFreq=1035, maxDocs=44218)
                  0.05127382 = queryNorm
                0.4201774 = fieldWeight in 5858, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.7537646 = idf(docFreq=1035, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5858)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this article we report on an experiment that tested how useful library-based guidelines are for measuring the integrity of information in the era of fake news. We found that the usefulness of these guidelines depends on at least three factors: weighting indicators (criteria), clear instructions, and context-specificity.
  9. Rügenhagen, M.; Beck, T.S.; Sartorius, E.J.: Information integrity in the era of Fake News : an experiment using library guidelines to judge information integrity (2020) 0.03
    0.025603896 = product of:
      0.051207792 = sum of:
        0.051207792 = product of:
          0.102415584 = sum of:
            0.102415584 = weight(_text_:report in 113) [ClassicSimilarity], result of:
              0.102415584 = score(doc=113,freq=2.0), product of:
                0.24374367 = queryWeight, product of:
                  4.7537646 = idf(docFreq=1035, maxDocs=44218)
                  0.05127382 = queryNorm
                0.4201774 = fieldWeight in 113, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.7537646 = idf(docFreq=1035, maxDocs=44218)
                  0.0625 = fieldNorm(doc=113)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this article we report on an experiment that tested how useful library-based guidelines are for measuring the integrity of information in the era of fake news. We found that the usefulness of these guidelines depends on at least three factors: weighting indicators (criteria), clear instructions, and context-specificity.
  10. Harlan, E.; Köppen, U.; Schnuck, O.; Wreschniok, L.: Fragwürdige Personalauswahl mit Algorithmen : KI zur Persönlichkeitsanalyse (2021) 0.03
    0.025603896 = product of:
      0.051207792 = sum of:
        0.051207792 = product of:
          0.102415584 = sum of:
            0.102415584 = weight(_text_:report in 143) [ClassicSimilarity], result of:
              0.102415584 = score(doc=143,freq=2.0), product of:
                0.24374367 = queryWeight, product of:
                  4.7537646 = idf(docFreq=1035, maxDocs=44218)
                  0.05127382 = queryNorm
                0.4201774 = fieldWeight in 143, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.7537646 = idf(docFreq=1035, maxDocs=44218)
                  0.0625 = fieldNorm(doc=143)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    https://www.tagesschau.de/investigativ/report-muenchen/kuenstliche-intelligenz-persoenlichkeitsanalyse-101.html
  11. ¬Der Student aus dem Computer (2023) 0.02
    0.024314132 = product of:
      0.048628263 = sum of:
        0.048628263 = product of:
          0.09725653 = sum of:
            0.09725653 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.09725653 = score(doc=1079,freq=2.0), product of:
                0.17955218 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05127382 = queryNorm
                0.5416616 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 1.2023 16:22:55
  12. Stvilia, B.; Lee, D.J.; Han, N.-e.: "Striking out on your own" : a study of research information management problems on university campuses (2021) 0.02
    0.022630861 = product of:
      0.045261722 = sum of:
        0.045261722 = product of:
          0.090523444 = sum of:
            0.090523444 = weight(_text_:report in 309) [ClassicSimilarity], result of:
              0.090523444 = score(doc=309,freq=4.0), product of:
                0.24374367 = queryWeight, product of:
                  4.7537646 = idf(docFreq=1035, maxDocs=44218)
                  0.05127382 = queryNorm
                0.37138787 = fieldWeight in 309, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.7537646 = idf(docFreq=1035, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=309)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Here, we report on a qualitative study that examined research information management (RIM) ecosystems on research university campuses from the perspectives of research information (RI) managers and librarians. In the study, we identified 21 RIM services offered to researchers, ranging from discovering, storing, and sharing authored content to identifying expertise, recruiting faculty, and ensuring the diversity of committee assignments. In addition, we identified 15 types of RIM service provision and adoption problems, analyzed their activity structures, and connected them to strategies for their resolution. Finally, we report on skills that the study participants reported as being needed in their work. These findings can inform the development of best practice guides for RIM on university campuses. The study also advances the state of the art of RIM research by applying the typology of contradictions from activity theory to categorize the problems of RIM service provision and connect their resolution to theories and findings of prior studies in the literature. In this way, the research expands the theoretical base used to study RIM in general and RIM at research universities in particular.
  13. Chen, S.S.-J.: Methodological considerations for developing Art & Architecture Thesaurus in Chinese and its applications (2021) 0.02
    0.022630861 = product of:
      0.045261722 = sum of:
        0.045261722 = product of:
          0.090523444 = sum of:
            0.090523444 = weight(_text_:report in 579) [ClassicSimilarity], result of:
              0.090523444 = score(doc=579,freq=4.0), product of:
                0.24374367 = queryWeight, product of:
                  4.7537646 = idf(docFreq=1035, maxDocs=44218)
                  0.05127382 = queryNorm
                0.37138787 = fieldWeight in 579, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.7537646 = idf(docFreq=1035, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=579)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The development of a multilingual thesaurus needs appropriate methodological considerations, not only for linguistics but also for cultural heterogeneity, as demonstrated in this report on the multilingual project of the Art & Architecture Thesaurus (AAT) in the Chinese language, which has been a collaboration between the Academia Sinica Center for Digital Culture and the Getty Research Institute for more than a decade. After a brief overview of the project, the paper will introduce a holistic methodology for considering how to enable Western art to be accessible to Chinese users and Chinese art accessible to Western users. The conceptual and structural issues will be discussed, especially the challenges of developing terminology in two different cultures. For instance, some terms shared by Western and Chinese cultures could be understood differently in each culture, which raises questions regarding their locations within the hierarchical structure of the AAT. Finally, the report will provide cases to demonstrate how the Chinese-Language AAT language supports online exhibitions, digital humanities and linking of digital art history content to the web of data.
  14. Jaeger, L.: Wissenschaftler versus Wissenschaft (2020) 0.02
    0.020840684 = product of:
      0.041681368 = sum of:
        0.041681368 = product of:
          0.083362736 = sum of:
            0.083362736 = weight(_text_:22 in 4156) [ClassicSimilarity], result of:
              0.083362736 = score(doc=4156,freq=2.0), product of:
                0.17955218 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05127382 = queryNorm
                0.46428138 = fieldWeight in 4156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4156)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    2. 3.2020 14:08:22
  15. Ibrahim, G.M.; Taylor, M.: Krebszellen manipulieren Neurone : Gliome (2023) 0.02
    0.020840684 = product of:
      0.041681368 = sum of:
        0.041681368 = product of:
          0.083362736 = sum of:
            0.083362736 = weight(_text_:22 in 1203) [ClassicSimilarity], result of:
              0.083362736 = score(doc=1203,freq=2.0), product of:
                0.17955218 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05127382 = queryNorm
                0.46428138 = fieldWeight in 1203, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1203)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Spektrum der Wissenschaft. 2023, H.10, S.22-24
  16. Lee, Y.-Y.; Ke, H.; Yen, T.-Y.; Huang, H.-H.; Chen, H.-H.: Combining and learning word embedding with WordNet for semantic relatedness and similarity measurement (2020) 0.02
    0.019202922 = product of:
      0.038405843 = sum of:
        0.038405843 = product of:
          0.076811686 = sum of:
            0.076811686 = weight(_text_:report in 5871) [ClassicSimilarity], result of:
              0.076811686 = score(doc=5871,freq=2.0), product of:
                0.24374367 = queryWeight, product of:
                  4.7537646 = idf(docFreq=1035, maxDocs=44218)
                  0.05127382 = queryNorm
                0.31513304 = fieldWeight in 5871, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.7537646 = idf(docFreq=1035, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5871)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this research, we propose 3 different approaches to measure the semantic relatedness between 2 words: (i) boost the performance of GloVe word embedding model via removing or transforming abnormal dimensions; (ii) linearly combine the information extracted from WordNet and word embeddings; and (iii) utilize word embedding and 12 linguistic information extracted from WordNet as features for Support Vector Regression. We conducted our experiments on 8 benchmark data sets, and computed Spearman correlations between the outputs of our methods and the ground truth. We report our results together with 3 state-of-the-art approaches. The experimental results show that our method can outperform state-of-the-art approaches in all the selected English benchmark data sets.
  17. Isaac, A.; Raemy, J.A.; Meijers, E.; Valk, S. De; Freire, N.: Metadata aggregation via linked data : results of the Europeana Common Culture project (2020) 0.02
    0.019202922 = product of:
      0.038405843 = sum of:
        0.038405843 = product of:
          0.076811686 = sum of:
            0.076811686 = weight(_text_:report in 39) [ClassicSimilarity], result of:
              0.076811686 = score(doc=39,freq=2.0), product of:
                0.24374367 = queryWeight, product of:
                  4.7537646 = idf(docFreq=1035, maxDocs=44218)
                  0.05127382 = queryNorm
                0.31513304 = fieldWeight in 39, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.7537646 = idf(docFreq=1035, maxDocs=44218)
                  0.046875 = fieldNorm(doc=39)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Digital cultural heritage resources are widely available on the web through the digital libraries of heritage institutions. To address the difficulties of discoverability in cultural heritage, the common practice is metadata aggregation, where centralized efforts like Europeana facilitate discoverability by collecting the resources' metadata. We present the results of the linked data aggregation task conducted within the Europeana Common Culture project, which attempted an innovative approach to aggregation based on linked data made available by cultural heritage institutions. This task ran for one year with participation of eleven organizations, involving the three member roles of the Europeana network: data providers, intermediary aggregators, and the central aggregation hub, Europeana. We report on the challenges that were faced by data providers, the standards and specifications applied, and the resulting aggregated metadata.
  18. Zhang, Y.; Ren, P.; Rijke, M. de: ¬A taxonomy, data set, and benchmark for detecting and classifying malevolent dialogue responses (2021) 0.02
    0.019202922 = product of:
      0.038405843 = sum of:
        0.038405843 = product of:
          0.076811686 = sum of:
            0.076811686 = weight(_text_:report in 356) [ClassicSimilarity], result of:
              0.076811686 = score(doc=356,freq=2.0), product of:
                0.24374367 = queryWeight, product of:
                  4.7537646 = idf(docFreq=1035, maxDocs=44218)
                  0.05127382 = queryNorm
                0.31513304 = fieldWeight in 356, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.7537646 = idf(docFreq=1035, maxDocs=44218)
                  0.046875 = fieldNorm(doc=356)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Conversational interfaces are increasingly popular as a way of connecting people to information. With the increased generative capacity of corpus-based conversational agents comes the need to classify and filter out malevolent responses that are inappropriate in terms of content and dialogue acts. Previous studies on the topic of detecting and classifying inappropriate content are mostly focused on a specific category of malevolence or on single sentences instead of an entire dialogue. We make three contributions to advance research on the malevolent dialogue response detection and classification (MDRDC) task. First, we define the task and present a hierarchical malevolent dialogue taxonomy. Second, we create a labeled multiturn dialogue data set and formulate the MDRDC task as a hierarchical classification task. Last, we apply state-of-the-art text classification methods to the MDRDC task, and report on experiments aimed at assessing the performance of these approaches.
  19. Ortega, J.L.: Classification and analysis of PubPeer comments : how a web journal club is used (2022) 0.02
    0.019202922 = product of:
      0.038405843 = sum of:
        0.038405843 = product of:
          0.076811686 = sum of:
            0.076811686 = weight(_text_:report in 544) [ClassicSimilarity], result of:
              0.076811686 = score(doc=544,freq=2.0), product of:
                0.24374367 = queryWeight, product of:
                  4.7537646 = idf(docFreq=1035, maxDocs=44218)
                  0.05127382 = queryNorm
                0.31513304 = fieldWeight in 544, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.7537646 = idf(docFreq=1035, maxDocs=44218)
                  0.046875 = fieldNorm(doc=544)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This study explores the use of PubPeer by the scholarly community, to understand the issues discussed in an online journal club, the disciplines most commented on, and the characteristics of the most prolific users. A sample of 39,985 posts about 24,779 publications were extracted from PubPeer in 2019 and 2020. These comments were divided into seven categories according to their degree of seriousness (Positive review, Critical review, Lack of information, Honest errors, Methodological flaws, Publishing fraud, and Manipulation). The results show that more than two-thirds of comments are posted to report some type of misconduct, mainly about image manipulation. These comments generate most discussion and take longer to be posted. By discipline, Health Sciences and Life Sciences are the most discussed research areas. The results also reveal "super commenters," users who access the platform to systematically review publications. The study ends by discussing how various disciplines use the site for different purposes.
  20. Berg, A.; Nelimarkka, M.: Do you see what I see? : measuring the semantic differences in image-recognition services' outputs (2023) 0.02
    0.019202922 = product of:
      0.038405843 = sum of:
        0.038405843 = product of:
          0.076811686 = sum of:
            0.076811686 = weight(_text_:report in 1070) [ClassicSimilarity], result of:
              0.076811686 = score(doc=1070,freq=2.0), product of:
                0.24374367 = queryWeight, product of:
                  4.7537646 = idf(docFreq=1035, maxDocs=44218)
                  0.05127382 = queryNorm
                0.31513304 = fieldWeight in 1070, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.7537646 = idf(docFreq=1035, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1070)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    As scholars increasingly undertake large-scale analysis of visual materials, advanced computational tools show promise for informing that process. One technique in the toolbox is image recognition, made readily accessible via Google Vision AI, Microsoft Azure Computer Vision, and Amazon's Rekognition service. However, concerns about such issues as bias factors and low reliability have led to warnings against research employing it. A systematic study of cross-service label agreement concretized such issues: using eight datasets, spanning professionally produced and user-generated images, the work showed that image-recognition services disagree on the most suitable labels for images. Beyond supporting caveats expressed in prior literature, the report articulates two mitigation strategies, both involving the use of multiple image-recognition services: Highly explorative research could include all the labels, accepting noisier but less restrictive analysis output. Alternatively, scholars may employ word-embedding-based approaches to identify concepts that are similar enough for their purposes, then focus on those labels filtered in.

Languages

  • e 99
  • d 30

Types

  • a 120
  • el 23
  • m 3
  • p 3
  • x 1