Search (114 results, page 1 of 6)

  • year_i:[2020 TO 2030}
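The active facet above uses Lucene range syntax: `[` includes its bound and `}` excludes it, so `year_i:[2020 TO 2030}` matches publication years 2020 through 2029. A minimal sketch of that matching rule (the function name is illustrative, not part of any search API):

```python
def matches_year_facet(year: int) -> bool:
    # year_i:[2020 TO 2030} - '[' is an inclusive bound, '}' an exclusive one
    return 2020 <= year < 2030
```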
  1. Geras, A.; Siudem, G.; Gagolewski, M.: Should we introduce a dislike button for academic articles? (2020) 0.04
    0.042908456 = product of:
      0.08581691 = sum of:
        0.08581691 = product of:
          0.12872536 = sum of:
            0.08928455 = weight(_text_:universal in 5620) [ClassicSimilarity], result of:
              0.08928455 = score(doc=5620,freq=2.0), product of:
                0.25562882 = queryWeight, product of:
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.04851763 = queryNorm
                0.3492742 = fieldWeight in 5620, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5620)
            0.039440814 = weight(_text_:22 in 5620) [ClassicSimilarity], result of:
              0.039440814 = score(doc=5620,freq=2.0), product of:
                0.16990048 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04851763 = queryNorm
                0.23214069 = fieldWeight in 5620, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5620)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
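The tree above is Lucene's ClassicSimilarity (TF-IDF) explain output. As a sketch, each `weight(...)` leaf and the final document score can be recomputed from the values shown in the tree (tf, idf, fieldNorm, queryNorm, and the two coord factors):

```python
import math

def leaf_score(freq, doc_freq, max_docs, field_norm, query_norm):
    """Recompute one weight(...) leaf of a ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                             # tf(freq=2.0) = 1.4142135
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq=618) = 5.268782
    query_weight = idf * query_norm                  # 0.25562882
    field_weight = tf * idf * field_norm             # 0.3492742
    return query_weight * field_weight               # 0.08928455

universal = leaf_score(2.0, 618, 44218, 0.046875, 0.04851763)   # _text_:universal
term22 = leaf_score(2.0, 3622, 44218, 0.046875, 0.04851763)     # _text_:22
# coord(2/3): 2 of 3 query terms matched; coord(1/2): 1 of 2 clauses matched.
score = (universal + term22) * (2 / 3) * (1 / 2)                # 0.042908456
```

The same recipe reproduces every explain tree in this result list; only the leaf inputs change per document and term.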
    
    Abstract
    There is a mutual resemblance between the behavior of users of the Stack Exchange and the dynamics of the citation accumulation process in the scientific community, which enabled us to tackle the outwardly intractable problem of assessing the impact of introducing "negative" citations. Although the most frequent reason to cite an article is to highlight the connection between the 2 publications, researchers sometimes mention an earlier work to cast a negative light. While computing citation-based scores, for instance, the h-index, information about the reason why an article was mentioned is neglected. Therefore, it can be questioned whether these indices describe scientific achievements accurately. In this article we shed light on the problem of "negative" citations, analyzing data from Stack Exchange and, to draw more universal conclusions, we derive an approximation of citation scores. Here we show that the quantified influence of introducing negative citations is of lesser importance and that they could be used as an indicator of where the attention of the scientific community is allocated.
    Date
    6. 1.2020 18:10:22
  2. Bullard, J.; Dierking, A.; Grundner, A.: Centring LGBT2QIA+ subjects in knowledge organization systems (2020) 0.04
    0.042908456 = product of:
      0.08581691 = sum of:
        0.08581691 = product of:
          0.12872536 = sum of:
            0.08928455 = weight(_text_:universal in 5996) [ClassicSimilarity], result of:
              0.08928455 = score(doc=5996,freq=2.0), product of:
                0.25562882 = queryWeight, product of:
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.04851763 = queryNorm
                0.3492742 = fieldWeight in 5996, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5996)
            0.039440814 = weight(_text_:22 in 5996) [ClassicSimilarity], result of:
              0.039440814 = score(doc=5996,freq=2.0), product of:
                0.16990048 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04851763 = queryNorm
                0.23214069 = fieldWeight in 5996, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5996)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    This paper contains a report of two interdependent knowledge organization (KO) projects for an LGBT2QIA+ library. The authors, in the context of volunteer library work for an independent library, redesigned the classification system and subject cataloguing guidelines to centre LGBT2QIA+ subjects. We discuss the priorities of creating and maintaining knowledge organization systems for a historically marginalized community and address the challenge that queer subjectivity poses to the goals of KO. The classification system features a focus on identity and physically reorganizes the library space in a way that accounts for the multiple and overlapping labels that constitute the currently articulated boundaries of this community. The subject heading system focuses on making visible topics and elements of identity made invisible by universal systems and by the newly implemented classification system. We discuss how this project may inform KO for other marginalized subjects, particularly through process and documentation that prioritizes transparency and the acceptance of an unfinished endpoint for queer KO.
    Date
    6.10.2020 21:22:33
  3. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.04
    0.038529426 = product of:
      0.07705885 = sum of:
        0.07705885 = product of:
          0.23117656 = sum of:
            0.23117656 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.23117656 = score(doc=862,freq=2.0), product of:
                0.411333 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04851763 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  4. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.03
    0.03210786 = product of:
      0.06421572 = sum of:
        0.06421572 = product of:
          0.19264714 = sum of:
            0.19264714 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.19264714 = score(doc=5669,freq=2.0), product of:
                0.411333 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04851763 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    "The English-language Wikipedia now has more than 6 million articles. The German-language Wikipedia is second with 2.3 million articles, and the French-language Wikipedia third with 2.1 million articles (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> and Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/>). 250120 via digithek ch = #fineBlog s.a.: In light of the publication last week of the six-millionth article in the English-language Wikipedia, the community newspaper "Wikipedia Signpost" has called for a moratorium on the publication of articles about companies. This is not a reproach against the Wikimedia Foundation, but the current measures to protect the encyclopedia against abusive undeclared paid editing are clearly not working. *"As the volunteer editors are currently being overwhelmed by advertising in the guise of Wikipedia articles, and as the WMF seems unable to counter it in any way, the only viable path for the editors is to prohibit, for the time being, the creation of new articles about companies,"* writes user Smallbones in his editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> for today's issue."
  5. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.03
    0.03210786 = product of:
      0.06421572 = sum of:
        0.06421572 = product of:
          0.19264714 = sum of:
            0.19264714 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
              0.19264714 = score(doc=1000,freq=2.0), product of:
                0.411333 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04851763 = queryNorm
                0.46834838 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Master thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
  6. Lima, G.A. de; Castro, I.R.: Uso da classificação decimal universal para a recuperação da informação em ambientes digitais : uma revisão sistemática da literatura (2021) 0.03
    0.02772866 = product of:
      0.05545732 = sum of:
        0.05545732 = product of:
          0.16637196 = sum of:
            0.16637196 = weight(_text_:universal in 760) [ClassicSimilarity], result of:
              0.16637196 = score(doc=760,freq=10.0), product of:
                0.25562882 = queryWeight, product of:
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.04851763 = queryNorm
                0.6508341 = fieldWeight in 760, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=760)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    Knowledge Organization Systems, even traditional ones, such as the Universal Decimal Classification, have been studied to improve the retrieval of information online, although the potential of using knowledge structures in the user interface has not yet been widespread. Objective: This study presents a mapping of scientific production on information retrieval methodologies, which make use of the Universal Decimal Classification. Methodology: Systematic Literature Review, conducted in two stages, with a selection of 44 publications, resulting in the time interval from 1964 to 2017, whose categories analyzed were: most productive authors, languages of publications, types of document, year of publication, most cited work, major impact journal, and thematic categories covered in the publications. Results: A total of nine more productive authors and co-authors were found; predominance of the English language (42 publications); works published in the format of journal articles (33); and highlight to the year 2007 (eight publications). In addition, it was identified that the most cited work was by McIlwaine (1997), with 61 citations, and the journal Extensions & Corrections to the UDC was the one with the largest number of publications, in addition to the incidence of the theme Universal Automation linked to a thesaurus for information retrieval, present in 19 works. Conclusions: There is a shortage of studies that explore the potential of the Decimal Classification, especially in Brazilian literature, which highlights the need for further study on the topic, involving research at the national and international levels.
    Footnote
    English title: Use of the Universal Decimal Classification for the recovery of information in digital environments: a systematic review of the literature.
  7. Müller, O.L.: Pazifismus : eine Verteidigung (2022) 0.02
    0.019841012 = product of:
      0.039682023 = sum of:
        0.039682023 = product of:
          0.11904607 = sum of:
            0.11904607 = weight(_text_:universal in 875) [ClassicSimilarity], result of:
              0.11904607 = score(doc=875,freq=2.0), product of:
                0.25562882 = queryWeight, product of:
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.04851763 = queryNorm
                0.46569893 = fieldWeight in 875, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.0625 = fieldNorm(doc=875)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Series
    Was bedeutet das alles? (Reclams Universal-Bibliothek; 14354)
  8. Szostak, R.: Basic Concepts Classification (BCC) (2020) 0.02
    0.017537143 = product of:
      0.035074286 = sum of:
        0.035074286 = product of:
          0.10522286 = sum of:
            0.10522286 = weight(_text_:universal in 5883) [ClassicSimilarity], result of:
              0.10522286 = score(doc=5883,freq=4.0), product of:
                0.25562882 = queryWeight, product of:
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.04851763 = queryNorm
                0.4116236 = fieldWeight in 5883, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5883)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    The Basic Concepts Classification (BCC) is a "universal" scheme: it attempts to encompass all areas of human understanding. Whereas most universal schemes are organized around scholarly disciplines, the BCC is instead organized around phenomena (things), the relationships that exist among phenomena, and the properties that phenomena and relators may possess. This structure allows the BCC to apply facet analysis without requiring the use of "facet indicators." The main motivation for the BCC was a recognition that existing classifications that are organized around disciplines serve interdisciplinary scholarship poorly. Complex concepts that might be understood quite differently across groups and individuals can generally be broken into basic concepts for which there is enough shared understanding for the purposes of classification. Documents, ideas, and objects are classified synthetically by combining entries from the schedules of phenomena, relators, and properties. The inclusion of separate schedules of generally verb-like relators is one of the most unusual aspects of the BCC. This (and the schedules of properties that serve as adjectives or adverbs) allows the production of sentence-like subject strings. Documents can then be classified in terms of the main arguments made in the document. BCC provides very precise descriptors of documents by combining phenomena, relators, and properties synthetically. The terminology employed in the BCC reduces terminological ambiguity. The BCC is still being developed and it needs to be fleshed out in certain respects. Yet it also needs to be applied; only in application can the feasibility and desirability of the classification be adequately assessed.
  9. Nagel, T.: Was bedeutet das alles? : Eine ganz kurze Einführung in die Philosophie (2020) 0.02
    0.017360885 = product of:
      0.03472177 = sum of:
        0.03472177 = product of:
          0.10416531 = sum of:
            0.10416531 = weight(_text_:universal in 204) [ClassicSimilarity], result of:
              0.10416531 = score(doc=204,freq=2.0), product of:
                0.25562882 = queryWeight, product of:
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.04851763 = queryNorm
                0.40748656 = fieldWeight in 204, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=204)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Series
    Reclams Universal-Bibliothek ; Nr. 19000 (Was bedeutet das alles?)
  10. ¬Der Student aus dem Computer (2023) 0.02
    0.015338095 = product of:
      0.03067619 = sum of:
        0.03067619 = product of:
          0.092028566 = sum of:
            0.092028566 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.092028566 = score(doc=1079,freq=2.0), product of:
                0.16990048 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04851763 = queryNorm
                0.5416616 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1079)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    27. 1.2023 16:22:55
  11. Kragelj, M.; Borstnar, M.K.: Automatic classification of older electronic texts into the Universal Decimal Classification-UDC (2021) 0.01
    0.014029714 = product of:
      0.028059429 = sum of:
        0.028059429 = product of:
          0.084178284 = sum of:
            0.084178284 = weight(_text_:universal in 175) [ClassicSimilarity], result of:
              0.084178284 = score(doc=175,freq=4.0), product of:
                0.25562882 = queryWeight, product of:
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.04851763 = queryNorm
                0.32929888 = fieldWeight in 175, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.03125 = fieldNorm(doc=175)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    Purpose: The purpose of this study is to develop a model for automated classification of old digitised texts to the Universal Decimal Classification (UDC), using machine-learning methods. Design/methodology/approach: The general research approach is inherent to design science research, in which the problem of UDC assignment of the old, digitised texts is addressed by developing a machine-learning classification model. A corpus of 70,000 scholarly texts, fully bibliographically processed by librarians, was used to train and test the model, which was used for classification of old texts on a corpus of 200,000 items. Human experts evaluated the performance of the model. Findings: Results suggest that machine-learning models can correctly assign the UDC at some level for almost any scholarly text. Furthermore, the model can be recommended for the UDC assignment of older texts. Ten librarians corroborated this on 150 randomly selected texts. Research limitations/implications: The main limitations of this study were unavailability of labelled older texts and the limited availability of librarians. Practical implications: The classification model can provide a recommendation to the librarians during their classification work; furthermore, it can be implemented as an add-on to full-text search in the library databases. Social implications: The proposed methodology supports librarians by recommending UDC classifiers, thus saving time in their daily work. By automatically classifying older texts, digital libraries can provide a better user experience by enabling structured searches. These contribute to making knowledge more widely available and useable. Originality/value: These findings contribute to the field of automated classification of bibliographical information with the usage of full texts, especially in cases in which the texts are old, unstructured and in which archaic language and vocabulary are used.
  12. Jaeger, L.: Wissenschaftler versus Wissenschaft (2020) 0.01
    0.013146939 = product of:
      0.026293878 = sum of:
        0.026293878 = product of:
          0.07888163 = sum of:
            0.07888163 = weight(_text_:22 in 4156) [ClassicSimilarity], result of:
              0.07888163 = score(doc=4156,freq=2.0), product of:
                0.16990048 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04851763 = queryNorm
                0.46428138 = fieldWeight in 4156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4156)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    2. 3.2020 14:08:22
  13. Ibrahim, G.M.; Taylor, M.: Krebszellen manipulieren Neurone : Gliome (2023) 0.01
    0.013146939 = product of:
      0.026293878 = sum of:
        0.026293878 = product of:
          0.07888163 = sum of:
            0.07888163 = weight(_text_:22 in 1203) [ClassicSimilarity], result of:
              0.07888163 = score(doc=1203,freq=2.0), product of:
                0.16990048 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04851763 = queryNorm
                0.46428138 = fieldWeight in 1203, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1203)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    Spektrum der Wissenschaft. 2023, H.10, S.22-24
  14. Koster, L.: Persistent identifiers for heritage objects (2020) 0.01
    0.012400633 = product of:
      0.024801265 = sum of:
        0.024801265 = product of:
          0.07440379 = sum of:
            0.07440379 = weight(_text_:universal in 5718) [ClassicSimilarity], result of:
              0.07440379 = score(doc=5718,freq=2.0), product of:
                0.25562882 = queryWeight, product of:
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.04851763 = queryNorm
                0.29106182 = fieldWeight in 5718, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5718)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    Persistent identifiers (PIDs) are essential for accessing and referring to library, archive and museum (LAM) collection objects in a sustainable and unambiguous way, both internally and externally. Heritage institutions need a universal policy for the use of PIDs in order to have an efficient digital infrastructure at their disposal and to achieve optimal interoperability, leading to open data, open collections and efficient resource management. Here the discussion is limited to PIDs that institutions can assign to objects they own or administer themselves. PIDs for people, subjects etc. can be used by heritage institutions, but are generally managed by other parties. The first part of this article consists of a general theoretical description of persistent identifiers. First of all, I discuss the questions of what persistent identifiers are and what they are not, and what is needed to administer and use them. The most commonly used existing PID systems are briefly characterized. Then I discuss the types of objects PIDs can be assigned to. This section concludes with an overview of the requirements that apply if PIDs should also be used for linked data. The second part examines current infrastructural practices, and existing PID systems and their advantages and shortcomings. Based on these practical issues and the pros and cons of existing PID systems a list of requirements for PID systems is presented which is used to address a number of practical considerations. This section concludes with a number of recommendations.
  15. Wlodarczyk, B.: KABA Subject Headings and the National Library of Poland Descriptors in light of Wojciech Wrzosek's theory of historiographical metaphors and different historiographical traditions (2020) 0.01
    0.012400633 = product of:
      0.024801265 = sum of:
        0.024801265 = product of:
          0.07440379 = sum of:
            0.07440379 = weight(_text_:universal in 5733) [ClassicSimilarity], result of:
              0.07440379 = score(doc=5733,freq=2.0), product of:
                0.25562882 = queryWeight, product of:
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.04851763 = queryNorm
                0.29106182 = fieldWeight in 5733, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5733)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    The aims of this article are, first, to provide a necessary background to investigate the discipline of history from the knowledge organization (KO) perspective, and secondly, to present, on selected examples, a way of analyzing knowledge organization systems (KOSs) from the point of view of the theory of history. The study includes a literature review and epistemological analysis. It provides a preliminary analysis of history in two selected universal Polish KOSs: KABA subject headings and the National Library of Poland Descriptors. The research is restricted to the high-level concept of historiographical metaphors coined by Wojciech Wrzosek and how they can be utilized in analyzing KOSs. The analysis of the structure of the KOSs and indexing practices of selected history books is performed. A particular emphasis is placed upon the requirements of classical and non-classical historiography in the context of KO. Although the knowledge about historiographical metaphors given by Wrzosek can be helpful for the analysis and improvement of KOSs, it seems that their broad character can provide the creators only with some general guidelines. Historical research is multidimensional, which is why the general remarks presented in this article need to be supplemented with in-depth theoretical and empirical analyses of historiography.
  16. Rieder, B.: Engines of order : a mechanology of algorithmic techniques (2020) 0.01
    0.012400633 = product of:
      0.024801265 = sum of:
        0.024801265 = product of:
          0.07440379 = sum of:
            0.07440379 = weight(_text_:universal in 315) [ClassicSimilarity], result of:
              0.07440379 = score(doc=315,freq=2.0), product of:
                0.25562882 = queryWeight, product of:
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.04851763 = queryNorm
                0.29106182 = fieldWeight in 315, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=315)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Part I -- 1. Engines of Order -- 2. Rethinking Software -- 3. Software-Making and Algorithmic Techniques -- Part II -- 4. From Universal Classification to a Postcoordinated Universe -- 5. From Frequencies to Vectors -- 6. Interested Learning -- 7. Calculating Networks: From Sociometry to PageRank -- Conclusion: Toward Technical Culture Erscheint als Open Access bei De Gruyter.
  17. Moreira dos Santos Macula, B.C.: ¬The Universal Decimal Classification in the organization of knowledge : representing the concept of ethics (2023) 0.01
    0.012400633 = product of:
      0.024801265 = sum of:
        0.024801265 = product of:
          0.07440379 = sum of:
            0.07440379 = weight(_text_:universal in 1128) [ClassicSimilarity], result of:
              0.07440379 = score(doc=1128,freq=2.0), product of:
                0.25562882 = queryWeight, product of:
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.04851763 = queryNorm
                0.29106182 = fieldWeight in 1128, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1128)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  18. Koch, C.: Was ist Bewusstsein? (2020) 0.01
    0.010955783 = product of:
      0.021911565 = sum of:
        0.021911565 = product of:
          0.06573469 = sum of:
            0.06573469 = weight(_text_:22 in 5723) [ClassicSimilarity], result of:
              0.06573469 = score(doc=5723,freq=2.0), product of:
                0.16990048 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04851763 = queryNorm
                0.38690117 = fieldWeight in 5723, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5723)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    17. 1.2020 22:15:11
  19. Wagner, E.: Über Impfstoffe zur digitalen Identität? (2020) 0.01
    0.010955783 = product of:
      0.021911565 = sum of:
        0.021911565 = product of:
          0.06573469 = sum of:
            0.06573469 = weight(_text_:22 in 5846) [ClassicSimilarity], result of:
              0.06573469 = score(doc=5846,freq=2.0), product of:
                0.16990048 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04851763 = queryNorm
                0.38690117 = fieldWeight in 5846, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5846)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    4. 5.2020 17:22:40
  20. Engel, B.: Corona-Gesundheitszertifikat als Exitstrategie (2020) 0.01
    0.010955783 = product of:
      0.021911565 = sum of:
        0.021911565 = product of:
          0.06573469 = sum of:
            0.06573469 = weight(_text_:22 in 5906) [ClassicSimilarity], result of:
              0.06573469 = score(doc=5906,freq=2.0), product of:
                0.16990048 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04851763 = queryNorm
                0.38690117 = fieldWeight in 5906, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5906)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    4. 5.2020 17:22:28

Languages

  • e 80
  • d 32
  • pt 1

Types

  • a 104
  • el 21
  • m 6
  • p 2
  • x 1