Search (240 results, page 1 of 12)

  • Filter: year_i:[2020 TO 2030}
  1. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.24
    0.244769 = product of:
      0.489538 = sum of:
        0.0489538 = product of:
          0.1468614 = sum of:
            0.1468614 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.1468614 = score(doc=862,freq=2.0), product of:
                0.26131085 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.030822188 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.1468614 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.1468614 = score(doc=862,freq=2.0), product of:
            0.26131085 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030822188 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
        0.1468614 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.1468614 = score(doc=862,freq=2.0), product of:
            0.26131085 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030822188 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
        0.1468614 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.1468614 = score(doc=862,freq=2.0), product of:
            0.26131085 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030822188 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.5 = coord(4/8)
    
    Source
    https://arxiv.org/abs/2212.06721
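The explain tree above (Lucene's ClassicSimilarity) can be verified numerically. A minimal sketch of the per-term arithmetic, using the values reported for the "3a" term in doc 862 — the formulas (tf = sqrt(freq), idf = ln(maxDocs/(docFreq+1)) + 1, queryWeight = idf · queryNorm, fieldWeight = tf · idf · fieldNorm) are standard Lucene TF-IDF, assumed from the tree rather than taken from this page:

```python
import math

# Values reported in the explain tree for doc 862.
freq = 2.0                  # termFreq
idf = 8.478011              # ln(44218 / (24 + 1)) + 1
query_norm = 0.030822188
field_norm = 0.046875

tf = math.sqrt(freq)                   # 1.4142135
query_weight = idf * query_norm        # 0.26131085
field_weight = tf * idf * field_norm   # 0.56201804
score = query_weight * field_weight    # 0.1468614 = the weight(...) line

print(f"score = {score:.7f}")
```

Each of the four identical term weights in the tree is this product; the outer sum, coord(4/8), and the final product-of halving then yield the 0.244769 document score shown.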
  2. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.20
    
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the accompanying presentation: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
  3. Wilke, M.; Pauen, M.; Ayan, S.: »Wir überschätzen die Rolle des Bewusstseins systematisch« : Leib-Seele-Problem (2022) 0.07
    
    Abstract
    Over the past 20 years, scientists have learned a great deal about consciousness. One of the biggest advances: consciousness is now an established subject of empirical research, say neuroscientist Melanie Wilke and philosopher Michael Pauen. In this interview they explain which hurdles researchers still face and how they intend to finally crack the "hard nut" of the mind-body problem. A conversation about mind, brain, and their relationship to one another with neuroscientist Melanie Wilke and philosopher Michael Pauen.
    Source
    https://www.spektrum.de/news/leib-seele-problem-was-wissen-wir-ueber-das-bewusstsein/1974235
  4. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.01
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  5. Geras, A.; Siudem, G.; Gagolewski, M.: Should we introduce a dislike button for academic articles? (2020) 0.01
    
    Abstract
    There is a mutual resemblance between the behavior of users of the Stack Exchange and the dynamics of the citations accumulation process in the scientific community, which enabled us to tackle the outwardly intractable problem of assessing the impact of introducing "negative" citations. Although the most frequent reason to cite an article is to highlight the connection between the 2 publications, researchers sometimes mention an earlier work to cast a negative light. While computing citation-based scores, for instance, the h-index, information about the reason why an article was mentioned is neglected. Therefore, it can be questioned whether these indices describe scientific achievements accurately. In this article we shed insight into the problem of "negative" citations, analyzing data from Stack Exchange and, to draw more universal conclusions, we derive an approximation of citations scores. Here we show that the quantified influence of introducing negative citations is of lesser importance and that they could be used as an indicator of where the attention of the scientific community is allocated.
    Date
    6. 1.2020 18:10:22
  6. Irrgang, B.: Roboterbewusstsein, automatisiertes Entscheiden und Transhumanismus : Anthropomorphisierungen von KI im Licht evolutionär-phänomenologischer Leib-Anthropologie (2020) 0.00
    
  7. Nikiforova, A.A.: ¬The systems approach (2022) 0.00
    
    Abstract
    The review attempts to compare different points of view on the essence of the systems approach, describe the terminological confusion around it and analyse the numerous definitions of system. It is shown that the vagueness and ambiguity of the concept of the systems approach is manifested in the use of a number of terms which are similar in meaning and close in sound to it. It is proposed to divide the existing definitions of system into descriptive and formal ones. The concepts included in the descriptive definitions, as well as the numerous synonymous terms denoting them, are divided into five conceptual-terminological groups that differ in their content and logical meaning. The meanings of such concepts as minimal constituent parts, emergence, environment, boundaries, purpose, functions of system and systems hierarchy are revealed. Some uses of the concept in knowledge organization are mentioned. The problem of systems classification is touched upon. Separate sections are devoted to the highlights of the history of the systems approach, its criticism and the significance. Particular attention is paid to criticism of the mathematization of the systems approach. Possible reasons for the decline in interest in the systems approach are identified. It is concluded that the systems approach helps to find new ways to solve scientific and practical problems.
    Date
    20.11.2023 13:36:29
  8. Yang, F.; Zhang, X.: Focal fields in literature on the information divide : the USA, China, UK and India (2020) 0.00
    
    Abstract
    Purpose The purpose of this paper is to identify key countries and their focal research fields on the information divide. Design/methodology/approach Literature was retrieved to identify key countries and their primary focus. The literature research method was adopted to identify aspects of the primary focus in each key country. Findings The key countries with literature on the information divide are the USA, China, the UK and India. The problem of health is prominent in the USA, and solutions include providing information, distinguishing users' profiles and improving eHealth literacy. Economic and political factors led to the urban-rural information divide in China, and policy is the most powerful solution. Under the influence of humanism, research on the information divide in the UK focuses on all age groups, and solutions differ according to age. Deep-rooted patriarchal concepts and traditional marriage customs make the gender information divide prominent in India, and increasing women's information consciousness is a feasible way to reduce this divide. Originality/value This paper is an extensive review study on the information divide, which clarifies the key countries and their focal fields in research on this topic. More important, the paper innovatively analyzes and summarizes existing literature from a country perspective.
    Date
    13. 2.2020 18:22:13
  9. Zhang, L.; Lu, W.; Yang, J.: LAGOS-AND : a large gold standard dataset for scholarly author name disambiguation (2023) 0.00
    
    Abstract
    In this article, we present a method to automatically build large labeled datasets for the author ambiguity problem in the academic world by leveraging the authoritative academic resources, ORCID and DOI. Using the method, we built LAGOS-AND, two large, gold-standard sub-datasets for author name disambiguation (AND), of which LAGOS-AND-BLOCK is created for clustering-based AND research and LAGOS-AND-PAIRWISE is created for classification-based AND research. Our LAGOS-AND datasets are substantially different from the existing ones. The initial versions of the datasets (v1.0, released in February 2021) include 7.5 M citations authored by 798 K unique authors (LAGOS-AND-BLOCK) and close to 1 M instances (LAGOS-AND-PAIRWISE). And both datasets show close similarities to the whole Microsoft Academic Graph (MAG) across validations of six facets. In building the datasets, we reveal the variation degrees of last names in three literature databases, PubMed, MAG, and Semantic Scholar, by comparing author names hosted to the authors' official last names shown on the ORCID pages. Furthermore, we evaluate several baseline disambiguation methods as well as the MAG's author IDs system on our datasets, and the evaluation helps identify several interesting findings. We hope the datasets and findings will bring new insights for future studies. The code and datasets are publicly available.
    Date
    22. 1.2023 18:40:36
  10. Bärnreuther, K.: Informationskompetenz-Vermittlung für Schulklassen mit Wikipedia und dem Framework Informationskompetenz in der Hochschulbildung (2021) 0.00
    
    Date
    30. 6.2021 16:29:52
    Source
    o-bib: Das offene Bibliotheksjournal. 8(2021) Nr.2, S.1-22
  11. Hertzum, M.: Information seeking by experimentation : trying something out to discover what happens (2023) 0.00
    
    Date
    21. 3.2023 19:22:29
  12. Wolfangel, E.: DeepMind will Problem der Proteinfaltung gelöst haben (2020) 0.00
    
    Source
    https://www.spektrum.de/news/deepmind-will-problem-der-proteinfaltung-geloest-haben/1802324
  13. Thelwall, M.; Thelwall, S.: ¬A thematic analysis of highly retweeted early COVID-19 tweets : consensus, information, dissent and lockdown life (2020) 0.00
    
    Abstract
    Purpose Public attitudes towards COVID-19 and social distancing are critical in reducing its spread. It is therefore important to understand public reactions and information dissemination in all major forms, including on social media. This article investigates important issues reflected on Twitter in the early stages of the public reaction to COVID-19. Design/methodology/approach A thematic analysis of the most retweeted English-language tweets mentioning COVID-19 during March 10-29, 2020. Findings The main themes identified for the 87 qualifying tweets accounting for 14 million retweets were: lockdown life; attitude towards social restrictions; politics; safety messages; people with COVID-19; support for key workers; work; and COVID-19 facts/news. Research limitations/implications Twitter played many positive roles, mainly through unofficial tweets. Users shared social distancing information, helped build support for social distancing, criticised government responses, expressed support for key workers and helped each other cope with social isolation. A few popular tweets not supporting social distancing show that government messages sometimes failed. Practical implications Public health campaigns in future may consider encouraging grass roots social web activity to support campaign goals. At a methodological level, analysing retweet counts emphasised politics and ignored practical implementation issues. Originality/value This is the first qualitative analysis of general COVID-19-related retweeting.
    Date
    20. 1.2015 18:30:22
  14. Barité, M.; Parentelli, V.; Rodríguez Casaballe, N.; Suárez, M.V.: Interdisciplinarity and postgraduate teaching of knowledge organization (KO) : elements for a necessary dialogue (2023) 0.00
    
    Abstract
    Interdisciplinarity implies the prior existence of disciplinary fields, not their dissolution. As a general objective, we propose an initial approach to the emphasis given to interdisciplinarity in the teaching of KO, through the teaching staff responsible for postgraduate courses focused on, or related to, KO at Ibero-American universities. For the research, a survey addressed to teachers was designed and distributed along four lines of action: 1. How teachers manage the concept of interdisciplinarity. 2. The place teachers give to interdisciplinarity in KO. 3. An assessment of the interdisciplinary content teachers incorporate into their postgraduate courses. 4. The set of teaching strategies and resources teachers use to include interdisciplinarity in the teaching of KO. The study analyzed 22 responses. Preliminary results show that KO teachers recognize the influence of other disciplines on concepts, theories, methods, and applications, but no consensus has been reached on which disciplines and authors build the interdisciplinary bridges. Among other conclusions, the study strongly suggests that environmental and social tensions are reflected in subject representation, especially in the construction of friendly knowledge organization systems with interdisciplinary visions, and in the expressions through which information is sought.
    Date
    20.11.2023 17:29:13
  15. Vaas, R.: Wo die Wissenschaft endet (2020) 0.00
    Series
    Titelthema: Wissenschaft löst Ihr Problem
  16. Vaas, R.: Wo endet die Wissenschaft? (2020) 0.00
    Series
    Titelthema: Wissenschaft löst Ihr Problem
  17. Hornung, P.: Im Kampf gegen Fake-Verlage (2021) 0.00
    Abstract
    According to an NDR survey, German universities are increasingly taking action against the problem of questionable online journals, which in their view damage the reputation of science.
  18. Metz, C.: ¬The new chatbots could change the world : can you trust them? (2022) 0.00
    Abstract
    Siri, Google Search, online marketing and your child's homework will never be the same. Then there's the misinformation problem.
  19. Barthel, J.; Ciesielski, R.: Regeln zu ChatGPT an Unis oft unklar : KI in der Bildung (2023) 0.00
    Date
    29. 3.2023 13:23:26
    29. 3.2023 13:29:19
  20. Pooja, K.M.; Mondal, S.; Chandra, J.: ¬A graph combination with edge pruning-based approach for author name disambiguation (2020) 0.00
    Abstract
    Author name disambiguation (AND) is a challenging problem due to several issues, such as missing key identifiers, the same name corresponding to multiple authors, and inconsistent name representation. Several techniques have been proposed, but maintaining consistent accuracy across all data sets is still a major challenge. We identify two major issues associated with the AND problem. First, the namesake problem, in which two or more authors with the same name publish in a similar domain. Second, the diverse topic problem, in which one author publishes in diverse topical domains with different sets of coauthors. In this work, we initially propose a method named ATGEP for AND that addresses the namesake issue. We evaluate the performance of ATGEP using various ambiguous name references collected from the Arnetminer Citation (AC) and Web of Science (WoS) data sets. We empirically show that the two aforementioned problems are crucial to the AND task and difficult to handle using state-of-the-art techniques. To handle the diverse topic issue, we extend ATGEP to a new variant named ATGEP-web that considers external web information about the authors. Experiments show that, with enough information available from external web sources, ATGEP-web can significantly improve the results further compared with ATGEP.
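The abstract above describes ATGEP only at a high level, so the method itself is not reproduced here. As a minimal toy illustration of the namesake setting it discusses, the sketch below clusters the records published under one ambiguous name by shared coauthors, using union-find; all record data are hypothetical and this is not the ATGEP algorithm.

```python
from collections import defaultdict

def cluster_by_coauthors(records):
    """Toy author-name disambiguation for one ambiguous name:
    merge records that share at least one coauthor (union-find)."""
    parent = list(range(len(records)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Index records by coauthor name, then merge records per coauthor.
    by_coauthor = defaultdict(list)
    for idx, coauthors in enumerate(records):
        for name in coauthors:
            by_coauthor[name].append(idx)
    for idxs in by_coauthor.values():
        for other in idxs[1:]:
            union(idxs[0], other)

    # Collect records by root: each cluster is one inferred author.
    clusters = defaultdict(list)
    for idx in range(len(records)):
        clusters[find(idx)].append(idx)
    return sorted(clusters.values())

# Four hypothetical papers by "J. Smith", listed by their coauthor sets:
papers = [{"S. Mondal"}, {"S. Mondal", "A. Roy"}, {"J. Chandra"}, {"J. Chandra"}]
print(cluster_by_coauthors(papers))  # → [[0, 1], [2, 3]]
```

Records 0-1 and 2-3 land in separate clusters because they share no coauthor, which is exactly where the abstract's "diverse topic" problem bites: one real author whose coauthor sets never overlap would be split, motivating ATGEP-web's use of external web evidence.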
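The relevance figure shown after each entry in this listing is a Lucene ClassicSimilarity (TF-IDF) score. As a rough sketch of how such a score is assembled, the function below recombines the factors for the term "problem" in entry 15 (tf = 2, idf = 4.244485, fieldNorm = 0.09375, queryNorm = 0.030822188, coord factors 1/3 and 1/8); it is a simplified reconstruction of the scoring formula, not Lucene's actual code.

```python
import math

def classic_similarity_score(freq, idf, field_norm, query_norm, coords):
    """Simplified Lucene ClassicSimilarity single-term score:
    fieldWeight = sqrt(freq) * idf * fieldNorm   (document side)
    queryWeight = idf * queryNorm                (query side)
    score       = queryWeight * fieldWeight, scaled by coord factors."""
    field_weight = math.sqrt(freq) * idf * field_norm
    query_weight = idf * query_norm
    score = query_weight * field_weight
    for c in coords:  # coord(m/n): penalty for query terms missing in the doc
        score *= c
    return score

# Factors for the term "problem" in entry 15 of this result page:
score = classic_similarity_score(
    freq=2.0, idf=4.244485, field_norm=0.09375,
    query_norm=0.030822188, coords=[1 / 3, 1 / 8],
)
print(round(score, 10))  # reproduces the listed score 0.0030675277
```

The square root dampens repeated term occurrences, and idf enters twice (once per side), which is why rare query terms dominate these rankings.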

Languages

  • e 158
  • d 81

Types

  • a 218
  • el 49
  • m 9
  • p 4
  • x 1