Search (117 results, page 1 of 6)

  • Filter: year_i:[2020 TO 2030}
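  The mixed brackets in this filter are deliberate Solr/Lucene range syntax: a square bracket is an inclusive bound, a curly brace an exclusive one, so year_i:[2020 TO 2030} matches the years 2020 through 2029. Illustrative variants (the _i suffix follows Solr's dynamic-field naming convention for integer fields; the field name is taken from the filter above):

      year_i:[2020 TO 2030]   2020 through 2030, both bounds inclusive
      year_i:[2020 TO 2030}   2020 through 2029, upper bound excluded
      year_i:{2020 TO 2030}   2021 through 2029, both bounds excluded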
  1. Tay, A.: The next generation discovery citation indexes : a review of the landscape in 2020 (2020) 0.08
    0.08146443 = product of:
      0.16292886 = sum of:
        0.16292886 = sum of:
          0.114715785 = weight(_text_:ii in 40) [ClassicSimilarity], result of:
            0.114715785 = score(doc=40,freq=2.0), product of:
              0.2745971 = queryWeight, product of:
                5.4016213 = idf(docFreq=541, maxDocs=44218)
                0.050836053 = queryNorm
              0.41776034 = fieldWeight in 40, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.4016213 = idf(docFreq=541, maxDocs=44218)
                0.0546875 = fieldNorm(doc=40)
          0.048213083 = weight(_text_:22 in 40) [ClassicSimilarity], result of:
            0.048213083 = score(doc=40,freq=2.0), product of:
              0.1780192 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050836053 = queryNorm
              0.2708308 = fieldWeight in 40, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=40)
      0.5 = coord(1/2)
    
    Abstract
    Conclusion: There is a reason why Google Scholar and Web of Science/Scopus are kings of the hill in their respective arenas. They have strong brand recognition, a head start in development, and a mass of eyeballs and users that leads to an almost virtuous cycle of improvement. Competing against such well-established competitors is not easy even when one has deep pockets (Microsoft) or a killer idea (scite). It will be interesting to see what the landscape will look like in 2030. Stay tuned for part II, where I review each particular index.
    Date
    17.11.2020 12:22:59
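    The score breakdowns shown with each result follow Lucene's ClassicSimilarity (tf-idf): every matching term contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = tf x idf x fieldNorm, and coord(n/m) scales the sum by the fraction of query clauses that matched. A minimal Python sketch that reproduces the "ii" weight of result 1 from the factors printed above; the helper names are illustrative, not Lucene's API:

        import math

        # ClassicSimilarity factors, as printed in the explain tree above
        def tf(freq):
            return math.sqrt(freq)                    # 1.4142135 for freq=2.0

        def idf(doc_freq, max_docs):
            return 1.0 + math.log(max_docs / (doc_freq + 1.0))

        def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
            query_weight = idf(doc_freq, max_docs) * query_norm   # 0.2745971
            field_weight = tf(freq) * idf(doc_freq, max_docs) * field_norm
            return query_weight * field_weight

        print(term_weight(freq=2.0, doc_freq=541, max_docs=44218,
                          query_norm=0.050836053, field_norm=0.0546875))
        # ~0.114715785; the two matching clauses are then summed and scaled
        # by coord(1/2), giving 0.16292886 * 0.5 = 0.08146443 for result 1.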
  2. Wu, Z.; Li, R.; Zhou, Z.; Guo, J.; Jiang, J.; Su, X.: A user sensitive subject protection approach for book search service (2020) 0.06
    0.05818888 = product of:
      0.11637776 = sum of:
        0.11637776 = sum of:
          0.08193985 = weight(_text_:ii in 5617) [ClassicSimilarity], result of:
            0.08193985 = score(doc=5617,freq=2.0), product of:
              0.2745971 = queryWeight, product of:
                5.4016213 = idf(docFreq=541, maxDocs=44218)
                0.050836053 = queryNorm
              0.29840025 = fieldWeight in 5617, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.4016213 = idf(docFreq=541, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5617)
          0.034437917 = weight(_text_:22 in 5617) [ClassicSimilarity], result of:
            0.034437917 = score(doc=5617,freq=2.0), product of:
              0.1780192 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050836053 = queryNorm
              0.19345059 = fieldWeight in 5617, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5617)
      0.5 = coord(1/2)
    
    Abstract
    In a digital library, book search is one of the most important information services. However, with the rapid development of network technologies such as cloud computing, the server side of a digital library is increasingly untrusted, and preventing the disclosure of users' book-query privacy has become a growing concern. In this article, we propose to construct a group of plausible fake queries for each user book query to cover up the sensitive subjects behind users' queries. First, we propose a basic framework for privacy protection in book search, which requires no change to the book search algorithm running on the server side and no compromise of the accuracy of book search. Second, we present a privacy protection model for book search that formulates the constraints ideal fake queries should satisfy, namely (i) feature similarity, which measures the confusion effect of fake queries on users' queries, and (ii) privacy exposure, which measures the cover-up effect of fake queries on users' sensitive subjects. Third, we discuss the algorithm implementation for the privacy model. Finally, the effectiveness of our approach is demonstrated by theoretical analysis and experimental evaluation.
    Date
    6. 1.2020 17:22:25
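    The abstract above describes an algorithmic framework concrete enough to sketch: generate candidate fake queries, keep those that resemble the real query at the feature level (the confusion effect) while avoiding the user's sensitive subjects (the cover-up effect). The Jaccard stand-ins for the paper's two measures, the threshold, and all names below are my own illustration under those assumptions, not the authors' implementation:

        import random

        def feature_similarity(q1, q2):
            # Stand-in for the paper's feature similarity: term overlap (Jaccard)
            return len(q1 & q2) / len(q1 | q2)

        def privacy_exposure(fake, sensitive):
            # Stand-in for the paper's privacy exposure: share of sensitive terms
            return len(fake & sensitive) / len(fake)

        def pick_fake_queries(real, candidates, sensitive, k=3, min_sim=0.3):
            # Plausible fakes: similar enough to confuse an observer,
            # yet touching none of the user's sensitive subjects
            ok = [c for c in candidates
                  if feature_similarity(real, c) >= min_sim
                  and privacy_exposure(c, sensitive) == 0.0]
            return random.sample(ok, min(k, len(ok)))

        real = {"depression", "self", "help", "books"}
        candidates = [{"cookbooks", "self", "help"},
                      {"history", "books", "self"},
                      {"travel", "books", "help"}]
        print(pick_fake_queries(real, candidates, sensitive={"depression"}))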
  3. Jörs, B.: Ein kleines Fach zwischen "Daten" und "Wissen" II : Anmerkungen zum (virtuellen) "16th International Symposium of Information Science" (ISI 2021, Regensburg) (2021) 0.06
    0.05818888 = product of:
      0.11637776 = sum of:
        0.11637776 = sum of:
          0.08193985 = weight(_text_:ii in 330) [ClassicSimilarity], result of:
            0.08193985 = score(doc=330,freq=2.0), product of:
              0.2745971 = queryWeight, product of:
                5.4016213 = idf(docFreq=541, maxDocs=44218)
                0.050836053 = queryNorm
              0.29840025 = fieldWeight in 330, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.4016213 = idf(docFreq=541, maxDocs=44218)
                0.0390625 = fieldNorm(doc=330)
          0.034437917 = weight(_text_:22 in 330) [ClassicSimilarity], result of:
            0.034437917 = score(doc=330,freq=2.0), product of:
              0.1780192 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050836053 = queryNorm
              0.19345059 = fieldWeight in 330, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=330)
      0.5 = coord(1/2)
    
    Abstract
    Only information ethics, information literacy, and information assessment left? Yet it is precisely this sealing-off from other disciplines that reinforces the isolation of the "small field" of information science within the scientific community. The only remaining "independent" peripheral research areas are thus the ones that Wolf Rauch, as keynote speaker, already named in his introductory, historical-genetic talk on the state of information science at ISI 2021: "If academic information science (at least in Europe) hardly stands a chance of pushing back to the forefront of development in the area of systems and applications, there remain fields in which its contribution will be urgently needed in the coming phase of development: information ethics, information literacy, information assessment" (Wolf Rauch: Was aus der Informationswissenschaft geworden ist; in: Thomas Schmidt; Christian Wolff (Eds.): Information between Data and Knowledge. Schriften zur Informationswissenschaft 74, Regensburg, 2021, pp. 20-22; see also the reception of Rauch's contribution by Johannes Elia Panskus, Was aus der Informationswissenschaft geworden ist. Sie ist in der Realität angekommen, in: Open Password, 17 March 2021). Is that all? Sobering.
  4. Boczkowski, P.; Mitchelstein, E.: The digital environment : How we live, learn, work, and play now (2021) 0.05
    0.046551105 = product of:
      0.09310221 = sum of:
        0.09310221 = sum of:
          0.06555188 = weight(_text_:ii in 1003) [ClassicSimilarity], result of:
            0.06555188 = score(doc=1003,freq=2.0), product of:
              0.2745971 = queryWeight, product of:
                5.4016213 = idf(docFreq=541, maxDocs=44218)
                0.050836053 = queryNorm
              0.2387202 = fieldWeight in 1003, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.4016213 = idf(docFreq=541, maxDocs=44218)
                0.03125 = fieldNorm(doc=1003)
          0.027550334 = weight(_text_:22 in 1003) [ClassicSimilarity], result of:
            0.027550334 = score(doc=1003,freq=2.0), product of:
              0.1780192 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050836053 = queryNorm
              0.15476047 = fieldWeight in 1003, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1003)
      0.5 = coord(1/2)
    
    Content
    1. Three Environments, One Life -- Part I: Foundations -- 2. Mediatization -- 3. Algorithms -- 4. Race and Ethnicity -- 5. Gender -- Part II: Institutions -- 6. Parenting -- 7. Schooling -- 8. Working -- 9. Dating -- Part III: Leisure -- 10. Sports -- 11. Televised Entertainment -- 12. News -- Part IV: Politics -- 13. Misinformation and Disinformation -- 14. Electoral Campaigns -- 15. Activism -- Part V: Innovations -- 16. Data Science -- 17. Virtual Reality -- 18. Space Exploration -- 19. Bricks and Cracks in the Digital Environment
    Date
    22. 6.2023 18:25:18
  5. Barth, T.: Inverse Panopticon : Digitalisierung & Transhumanismus [Transhumanismus II] (2020) 0.04
    0.040969923 = product of:
      0.08193985 = sum of:
        0.08193985 = product of:
          0.1638797 = sum of:
            0.1638797 = weight(_text_:ii in 5592) [ClassicSimilarity], result of:
              0.1638797 = score(doc=5592,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.5968005 = fieldWeight in 5592, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5592)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  6. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.04
    0.04037056 = product of:
      0.08074112 = sum of:
        0.08074112 = product of:
          0.24222337 = sum of:
            0.24222337 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.24222337 = score(doc=862,freq=2.0), product of:
                0.4309886 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050836053 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  7. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.03
    0.033642136 = product of:
      0.06728427 = sum of:
        0.06728427 = product of:
          0.20185281 = sum of:
            0.20185281 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.20185281 = score(doc=5669,freq=2.0), product of:
                0.4309886 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050836053 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  8. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.03
    0.033642136 = product of:
      0.06728427 = sum of:
        0.06728427 = product of:
          0.20185281 = sum of:
            0.20185281 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
              0.20185281 = score(doc=1000,freq=2.0), product of:
                0.4309886 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050836053 = queryNorm
                0.46834838 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Master thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. See: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. For the accompanying presentation see: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
  9. Barth, T.: Digitalisierung und Lobby : Transhumanismus I (2020) 0.03
    0.028678946 = product of:
      0.057357892 = sum of:
        0.057357892 = product of:
          0.114715785 = sum of:
            0.114715785 = weight(_text_:ii in 5665) [ClassicSimilarity], result of:
              0.114715785 = score(doc=5665,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.41776034 = fieldWeight in 5665, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5665)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    See the continuation: Barth, T.: Inverse Panopticon: Digitalisierung & Transhumanismus [Transhumanismus II]. [25 January 2020]. At: https://www.heise.de/tp/features/Inverse-Panopticon-Digitalisierung-Transhumanismus-4645668.html?seite=all.
  10. Lee, Y.-Y.; Ke, H.; Yen, T.-Y.; Huang, H.-H.; Chen, H.-H.: Combining and learning word embedding with WordNet for semantic relatedness and similarity measurement (2020) 0.02
    0.024581954 = product of:
      0.049163908 = sum of:
        0.049163908 = product of:
          0.098327816 = sum of:
            0.098327816 = weight(_text_:ii in 5871) [ClassicSimilarity], result of:
              0.098327816 = score(doc=5871,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.3580803 = fieldWeight in 5871, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5871)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this research, we propose 3 different approaches to measure the semantic relatedness between 2 words: (i) boost the performance of the GloVe word embedding model by removing or transforming abnormal dimensions; (ii) linearly combine the information extracted from WordNet and word embeddings; and (iii) use word embeddings and 12 linguistic features extracted from WordNet as features for Support Vector Regression. We conducted our experiments on 8 benchmark data sets and computed Spearman correlations between the outputs of our methods and the ground truth. We report our results together with 3 state-of-the-art approaches. The experimental results show that our method can outperform the state-of-the-art approaches on all the selected English benchmark data sets.
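    Approach (ii) above, the linear combination of word-embedding and WordNet information scored by Spearman correlation, can be sketched as follows. The embeddings lookup, the choice of path similarity, and the mixing weight alpha are assumptions for illustration, not the paper's exact setup (requires numpy, scipy, and NLTK with the WordNet data installed):

        import numpy as np
        from scipy.stats import spearmanr
        from nltk.corpus import wordnet as wn

        def cosine(u, v):
            return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

        def wordnet_sim(w1, w2):
            # Best path similarity over all synset pairs; 0.0 if no path exists
            scores = [s1.path_similarity(s2) or 0.0
                      for s1 in wn.synsets(w1) for s2 in wn.synsets(w2)]
            return max(scores, default=0.0)

        def combined_sim(w1, w2, embeddings, alpha=0.5):
            # Linear combination of embedding cosine and WordNet similarity
            emb = cosine(embeddings[w1], embeddings[w2])
            return alpha * emb + (1.0 - alpha) * wordnet_sim(w1, w2)

        def evaluate(word_pairs, gold_scores, embeddings):
            # Spearman correlation between system scores and the benchmark
            preds = [combined_sim(a, b, embeddings) for a, b in word_pairs]
            return spearmanr(preds, gold_scores).correlation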
  11. Daquino, M.; Peroni, S.; Shotton, D.; Colavizza, G.; Ghavimi, B.; Lauscher, A.; Mayr, P.; Romanello, M.; Zumstein, P.: The OpenCitations Data Model (2020) 0.02
    0.024581954 = product of:
      0.049163908 = sum of:
        0.049163908 = product of:
          0.098327816 = sum of:
            0.098327816 = weight(_text_:ii in 38) [ClassicSimilarity], result of:
              0.098327816 = score(doc=38,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.3580803 = fieldWeight in 38, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.046875 = fieldNorm(doc=38)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Published in: The Semantic Web - ISWC 2020, 19th International Semantic Web Conference, Athens, Greece, November 2-6, 2020, Proceedings, Part II. See DOI: 10.1007/978-3-030-62466-8_28.
  12. Der Student aus dem Computer (2023) 0.02
    0.024106542 = product of:
      0.048213083 = sum of:
        0.048213083 = product of:
          0.09642617 = sum of:
            0.09642617 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.09642617 = score(doc=1079,freq=2.0), product of:
                0.1780192 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050836053 = queryNorm
                0.5416616 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 1.2023 16:22:55
  13. Gil-Berrozpe, J.C.: Description, categorization, and representation of hyponymy in environmental terminology (2022) 0.02
    0.023176087 = product of:
      0.046352174 = sum of:
        0.046352174 = product of:
          0.09270435 = sum of:
            0.09270435 = weight(_text_:ii in 1004) [ClassicSimilarity], result of:
              0.09270435 = score(doc=1004,freq=4.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.33760133 = fieldWeight in 1004, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1004)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Terminology has evolved from static and prescriptive theories to dynamic and cognitive approaches. Thanks to these approaches, there have been significant advances in the design and elaboration of terminological resources. This has resulted in the creation of tools such as terminological knowledge bases, which are able to show how concepts are interrelated through different semantic or conceptual relations. Of these relations, hyponymy is the most relevant to terminology work because it deals with concept categorization and term hierarchies. This doctoral thesis presents an enhancement of the semantic structure of EcoLexicon, a terminological knowledge base on environmental science. The aim of this research was to improve the description, categorization, and representation of hyponymy in environmental terminology. Therefore, we created HypoLexicon, a new stand-alone module for EcoLexicon in the form of a hyponymy-based terminological resource. This resource contains twelve terminological entries from four specialized domains (Biology, Chemistry, Civil Engineering, and Geology), consisting of 309 concepts and 465 terms associated with those concepts. This research was mainly based on the theoretical premises of Frame-based Terminology. This theory was combined with Cognitive Linguistics, for conceptual description and representation; Corpus Linguistics, for the extraction and processing of linguistic and terminological information; and Ontology, related to hyponymy and relevant for concept categorization. HypoLexicon was constructed from the following materials: (i) the EcoLexicon English Corpus; (ii) other specialized terminological resources, including EcoLexicon; (iii) Sketch Engine; and (iv) Lexonomy. This thesis explains the methodologies applied for corpus extraction and compilation, corpus analysis, the creation of conceptual hierarchies, and the design of the terminological template. The results of the creation of HypoLexicon are discussed by highlighting the information in the hyponymy-based terminological entries: (i) parent concept (hypernym); (ii) child concepts (hyponyms, with various hyponymy levels); (iii) terminological definitions; (iv) conceptual categories; (v) hyponymy subtypes; and (vi) hyponymic contexts. Furthermore, the features of HypoLexicon and navigation within it are described for both the user interface and the admin interface. In conclusion, this doctoral thesis lays the groundwork for developing a terminological resource that includes definitional, relational, ontological, and contextual information about specialized hypernyms and hyponyms. All of this information on specialized knowledge is simple to follow thanks to the hierarchical structure of the terminological template used in HypoLexicon. Therefore, not only does it enhance knowledge representation, but it also facilitates its acquisition.
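    The six information types listed for a hyponymy-based terminological entry map naturally onto a small record type; the sketch below is one reading of that description, not HypoLexicon's actual schema:

        from dataclasses import dataclass, field
        from typing import Dict, List

        @dataclass
        class HyponymyEntry:
            hypernym: str                                        # (i) parent concept
            hyponyms: Dict[str, int] = field(default_factory=dict)     # (ii) hyponym -> level
            definitions: Dict[str, str] = field(default_factory=dict)  # (iii) terminological definitions
            categories: List[str] = field(default_factory=list)        # (iv) conceptual categories
            subtypes: List[str] = field(default_factory=list)          # (v) hyponymy subtypes
            contexts: List[str] = field(default_factory=list)          # (vi) hyponymic contexts

        entry = HyponymyEntry(hypernym="rock",
                              hyponyms={"igneous rock": 1, "basalt": 2},
                              categories=["Geology"])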
  14. Jaeger, L.: Wissenschaftler versus Wissenschaft (2020) 0.02
    0.02066275 = product of:
      0.0413255 = sum of:
        0.0413255 = product of:
          0.082651 = sum of:
            0.082651 = weight(_text_:22 in 4156) [ClassicSimilarity], result of:
              0.082651 = score(doc=4156,freq=2.0), product of:
                0.1780192 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050836053 = queryNorm
                0.46428138 = fieldWeight in 4156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4156)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    2. 3.2020 14:08:22
  15. Ibrahim, G.M.; Taylor, M.: Krebszellen manipulieren Neurone : Gliome (2023) 0.02
    0.02066275 = product of:
      0.0413255 = sum of:
        0.0413255 = product of:
          0.082651 = sum of:
            0.082651 = weight(_text_:22 in 1203) [ClassicSimilarity], result of:
              0.082651 = score(doc=1203,freq=2.0), product of:
                0.1780192 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050836053 = queryNorm
                0.46428138 = fieldWeight in 1203, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1203)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Spektrum der Wissenschaft. 2023, H.10, S.22-24
  16. Sinha, P.K.; Dutta, B.: ¬A systematic analysis of flood ontologies : a parametric approach (2020) 0.02
    0.020484962 = product of:
      0.040969923 = sum of:
        0.040969923 = product of:
          0.08193985 = sum of:
            0.08193985 = weight(_text_:ii in 5758) [ClassicSimilarity], result of:
              0.08193985 = score(doc=5758,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.29840025 = fieldWeight in 5758, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5758)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The article identifies the core literature on flood ontologies and reviews these ontologies from various perspectives, such as their purpose, type, design methodology, the ontologies (re)used, and their focus on specific flood disaster phases. The study was conducted in two stages: i) literature identification, where the systematic literature review methodology was employed; and ii) ontological review, where the parametric approach was applied. The study resulted in a set of fourteen papers discussing the flood ontology (FO). The ontological review revealed that most of the flood ontologies were task ontologies, formal, modular, and used the web ontology language (OWL) for their representation. The most (re)used ontologies were SWEET, SSN, Time, and Space. METHONTOLOGY was the preferred design methodology, and for evaluation, application-based or data-based approaches were preferred. The majority of the ontologies were built around the response phase of the disaster. The unavailability of the full ontologies somewhat restricted the current study, as the structural ontology metrics are missing. Still, the scientific community and the developers of flood disaster management systems can refer to this work to see what is available in the literature on flood ontologies and the other major domains essential to building the FO.
  17. Baines, D.; Elliott, R.J.: Defining misinformation, disinformation and malinformation : an urgent need for clarity during the COVID-19 infodemic (2020) 0.02
    0.020484962 = product of:
      0.040969923 = sum of:
        0.040969923 = product of:
          0.08193985 = sum of:
            0.08193985 = weight(_text_:ii in 5853) [ClassicSimilarity], result of:
              0.08193985 = score(doc=5853,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.29840025 = fieldWeight in 5853, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5853)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    COVID-19 is an unprecedented global health crisis that will have immeasurable consequences for our economic and social well-being. Tedros Adhanom Ghebreyesus, the director general of the World Health Organization, stated "We're not just fighting an epidemic; we're fighting an infodemic". Currently, there is no robust scientific basis for the existing definitions of false information used in the fight against the COVID-19 infodemic. The purpose of this paper is to demonstrate how the use of a novel taxonomy and related model (based upon a conceptual framework that synthesizes insights from information science, philosophy, media studies and politics) can produce new scientific definitions of mis-, dis- and malinformation. We undertake our analysis from the viewpoint of information systems research. The conceptual approach to defining mis-, dis- and malinformation can be applied to a wide range of empirical examples and, if applied properly, may prove useful in fighting the COVID-19 infodemic. In sum, our research suggests that: (i) analyzing all types of information is important in the battle against the COVID-19 infodemic; (ii) a scientific approach is required so that different methods are not used by different studies; (iii) "misinformation", as an umbrella term, can be confusing and should be dropped from use; (iv) clear, scientific definitions of information types will be needed going forward; (v) malinformation is an overlooked phenomenon involving reconfigurations of the truth.
  18. Koya, K.; Chowdhury, G.: Cultural heritage information practices and iSchools education for achieving sustainable development (2020) 0.02
    0.020484962 = product of:
      0.040969923 = sum of:
        0.040969923 = product of:
          0.08193985 = sum of:
            0.08193985 = weight(_text_:ii in 5877) [ClassicSimilarity], result of:
              0.08193985 = score(doc=5877,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.29840025 = fieldWeight in 5877, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5877)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In 2015, the United Nations Educational, Scientific and Cultural Organization (UNESCO) began the process of inculcating culture into the United Nations' (UN) post-2015 Sustainable (formerly Millennium) Development Goals, which member countries agreed to achieve by 2030. By conducting a thematic analysis of the 25 UN-commissioned reports and policy documents, this research identifies 14 broad cultural heritage information themes that need to be practiced in order to achieve cultural sustainability, of which information platforms, information sharing, information broadcast, information quality, information usage training, information access, information collection, and contribution appear to be the most significant. An investigation of education on cultural heritage informatics and digital humanities at iSchools (www.ischools.org) using a gap analysis framework demonstrates the core information science skills required for cultural heritage education. The research demonstrates that: (i) a thematic analysis of cultural heritage policy documents can be used to explore the key themes for cultural informatics education and research that can lead to sustainable development; and (ii) cultural heritage information education should cover a series of skills that can be categorized in five key areas, viz., information, technology, leadership, application, and people and user skills.
  19. Vannini, S.; Gomez, R.; Newell, B.C.: "Mind the five" : guidelines for data privacy and security in humanitarian work with undocumented migrants and other vulnerable populations (2020) 0.02
    0.020484962 = product of:
      0.040969923 = sum of:
        0.040969923 = product of:
          0.08193985 = sum of:
            0.08193985 = weight(_text_:ii in 5947) [ClassicSimilarity], result of:
              0.08193985 = score(doc=5947,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.29840025 = fieldWeight in 5947, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5947)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The forced displacement and transnational migration of millions of people around the world is a growing phenomenon that has been met with increased surveillance and datafication by a variety of actors. Small humanitarian organizations that help irregular migrants in the United States frequently do not have the resources or expertise to fully address the implications of collecting, storing, and using data about the vulnerable populations they serve. As a result, there is a risk that their work could exacerbate the vulnerabilities of the very same migrants they are trying to help. In this study, we propose a conceptual framework for protecting privacy in the context of humanitarian information activities (HIA) with irregular migrants. We draw from a review of the academic literature as well as interviews with individuals affiliated with several US-based humanitarian organizations, higher education institutions, and nonprofit organizations that provide support to undocumented migrants. We discuss 3 primary issues: (i) HIA present both technological and human risks; (ii) the expectation of privacy self-management by vulnerable populations is problematic; and (iii) there is a need for robust, actionable, privacy-related guidelines for HIA. We suggest 5 recommendations to strengthen the privacy protection offered to undocumented migrants and other vulnerable populations.
  20. Rieder, B.: Engines of order : a mechanology of algorithmic techniques (2020) 0.02
    0.020484962 = product of:
      0.040969923 = sum of:
        0.040969923 = product of:
          0.08193985 = sum of:
            0.08193985 = weight(_text_:ii in 315) [ClassicSimilarity], result of:
              0.08193985 = score(doc=315,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.29840025 = fieldWeight in 315, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=315)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Part I -- 1. Engines of Order -- 2. Rethinking Software -- 3. Software-Making and Algorithmic Techniques -- Part II -- 4. From Universal Classification to a Postcoordinated Universe -- 5. From Frequencies to Vectors -- 6. Interested Learning -- 7. Calculating Networks: From Sociometry to PageRank -- Conclusion: Toward Technical Culture. Published Open Access by De Gruyter.

Languages

  • e 83
  • d 33

Types

  • a 108
  • el 26
  • m 4
  • p 2
  • x 1