Search (263 results, page 1 of 14)

  • year_i:[2020 TO 2030}
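The filter above uses Lucene range syntax, in which `[` marks an inclusive bound and `}` an exclusive one, so `year_i:[2020 TO 2030}` matches 2020 through 2029. A minimal sketch of that semantics (the function name is hypothetical, not part of any Lucene API):

```python
def in_lucene_range(value, lower, upper, lower_inclusive, upper_inclusive):
    """Evaluate a Lucene-style numeric range such as year_i:[2020 TO 2030}."""
    lo_ok = value >= lower if lower_inclusive else value > lower
    hi_ok = value <= upper if upper_inclusive else value < upper
    return lo_ok and hi_ok

# [2020 TO 2030} : inclusive '[' lower bound, exclusive '}' upper bound
matches = [y for y in range(2018, 2032)
           if in_lucene_range(y, 2020, 2030, True, False)]
print(matches)  # [2020, 2021, 2022, 2023, 2024, 2025, 2026, 2027, 2028, 2029]
```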
  1. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.08
    0.08344951 = sum of:
      0.06930768 = product of:
        0.20792302 = sum of:
          0.20792302 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
            0.20792302 = score(doc=1000,freq=2.0), product of:
              0.4439495 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.052364815 = queryNorm
              0.46834838 = fieldWeight in 1000, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1000)
        0.33333334 = coord(1/3)
      0.014141837 = product of:
        0.028283674 = sum of:
          0.028283674 = weight(_text_:library in 1000) [ClassicSimilarity], result of:
            0.028283674 = score(doc=1000,freq=4.0), product of:
              0.13768692 = queryWeight, product of:
                2.6293786 = idf(docFreq=8668, maxDocs=44218)
                0.052364815 = queryNorm
              0.2054202 = fieldWeight in 1000, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                2.6293786 = idf(docFreq=8668, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1000)
        0.5 = coord(1/2)
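The indented breakdowns attached to each hit are Lucene explain output for ClassicSimilarity (TF-IDF) scoring, where tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)). A minimal sketch reproducing the `weight(_text_:3a ...)` clause above from its printed inputs (function name is illustrative, not a Lucene API):

```python
import math

def classic_weight(freq, doc_freq, max_docs, query_norm, field_norm):
    """Recompute one weight(...) clause of a Lucene ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                              # 1.4142135 for freq=2.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # 8.478011 for docFreq=24
    query_weight = idf * query_norm                   # 0.4439495
    field_weight = tf * idf * field_norm              # 0.46834838
    return query_weight * field_weight

w = classic_weight(freq=2.0, doc_freq=24, max_docs=44218,
                   query_norm=0.052364815, field_norm=0.0390625)
print(f"{w:.8f}")  # ≈ 0.20792302, matching the tree above
```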
    
    Content
    Master thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. See: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
    Imprint
    Wien : Universität Wien / Library and Information Studies
  2. Bullard, J.; Dierking, A.; Grundner, A.: Centring LGBT2QIA+ subjects in knowledge organization systems (2020) 0.05
    0.045283623 = product of:
      0.090567246 = sum of:
        0.090567246 = sum of:
          0.047998987 = weight(_text_:library in 5996) [ClassicSimilarity], result of:
            0.047998987 = score(doc=5996,freq=8.0), product of:
              0.13768692 = queryWeight, product of:
                2.6293786 = idf(docFreq=8668, maxDocs=44218)
                0.052364815 = queryNorm
              0.34860963 = fieldWeight in 5996, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                2.6293786 = idf(docFreq=8668, maxDocs=44218)
                0.046875 = fieldNorm(doc=5996)
          0.042568255 = weight(_text_:22 in 5996) [ClassicSimilarity], result of:
            0.042568255 = score(doc=5996,freq=2.0), product of:
              0.18337266 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052364815 = queryNorm
              0.23214069 = fieldWeight in 5996, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5996)
      0.5 = coord(1/2)
    
    Abstract
    This paper contains a report of two interdependent knowledge organization (KO) projects for an LGBT2QIA+ library. The authors, in the context of volunteer library work for an independent library, redesigned the classification system and subject cataloguing guidelines to centre LGBT2QIA+ subjects. We discuss the priorities of creating and maintaining knowledge organization systems for a historically marginalized community and address the challenge that queer subjectivity poses to the goals of KO. The classification system features a focus on identity and physically reorganizes the library space in a way that accounts for the multiple and overlapping labels that constitute the currently articulated boundaries of this community. The subject heading system focuses on making visible topics and elements of identity made invisible by universal systems and by the newly implemented classification system. We discuss how this project may inform KO for other marginalized subjects, particularly through process and documentation that prioritizes transparency and the acceptance of an unfinished endpoint for queer KO.
    Date
    6.10.2020 21:22:33
  3. Morris, V.: Automated language identification of bibliographic resources (2020) 0.04
    0.0443785 = product of:
      0.088757 = sum of:
        0.088757 = sum of:
          0.031999324 = weight(_text_:library in 5749) [ClassicSimilarity], result of:
            0.031999324 = score(doc=5749,freq=2.0), product of:
              0.13768692 = queryWeight, product of:
                2.6293786 = idf(docFreq=8668, maxDocs=44218)
                0.052364815 = queryNorm
              0.23240642 = fieldWeight in 5749, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.6293786 = idf(docFreq=8668, maxDocs=44218)
                0.0625 = fieldNorm(doc=5749)
          0.056757677 = weight(_text_:22 in 5749) [ClassicSimilarity], result of:
            0.056757677 = score(doc=5749,freq=2.0), product of:
              0.18337266 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052364815 = queryNorm
              0.30952093 = fieldWeight in 5749, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=5749)
      0.5 = coord(1/2)
    
    Abstract
    This article describes experiments in the use of machine learning techniques at the British Library to assign language codes to catalog records, in order to provide information about the language of content of the resources described. In the first phase of the project, language codes were assigned to 1.15 million records with 99.7% confidence. The automated language identification tools developed will be used to contribute to future enhancement of over 4 million legacy records.
    Date
    2. 3.2020 19:04:22
  4. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.04
    0.041584603 = product of:
      0.08316921 = sum of:
        0.08316921 = product of:
          0.24950762 = sum of:
            0.24950762 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.24950762 = score(doc=862,freq=2.0), product of:
                0.4439495 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.052364815 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
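The coord(1/3) and coord(1/2) factors in the tree above scale a score by the fraction of query clauses that matched. A minimal sketch of that combination, assuming one matching clause out of three (the `_text_:3a` clause printed above); the function name is illustrative:

```python
def combine_with_coord(clause_scores, total_clauses):
    """Sum the matching clause scores, scaled by Lucene's coord factor."""
    matched = [s for s in clause_scores if s > 0.0]
    coord = len(matched) / total_clauses
    return sum(matched) * coord

# one of three clauses matched, with weight 0.24950762
inner = combine_with_coord([0.24950762], total_clauses=3)  # ≈ 0.08316921
outer = combine_with_coord([inner], total_clauses=2)       # ≈ 0.04158460
print(f"{inner:.8f} {outer:.8f}")
```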
    
    Source
    https://arxiv.org/abs/2212.06721
  5. Cheti, A.; Viti, E.: Functionality and merits of a faceted thesaurus : the case of the Nuovo soggettario (2023) 0.04
    0.038254336 = product of:
      0.07650867 = sum of:
        0.07650867 = sum of:
          0.033940412 = weight(_text_:library in 1181) [ClassicSimilarity], result of:
            0.033940412 = score(doc=1181,freq=4.0), product of:
              0.13768692 = queryWeight, product of:
                2.6293786 = idf(docFreq=8668, maxDocs=44218)
                0.052364815 = queryNorm
              0.24650425 = fieldWeight in 1181, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                2.6293786 = idf(docFreq=8668, maxDocs=44218)
                0.046875 = fieldNorm(doc=1181)
          0.042568255 = weight(_text_:22 in 1181) [ClassicSimilarity], result of:
            0.042568255 = score(doc=1181,freq=2.0), product of:
              0.18337266 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052364815 = queryNorm
              0.23214069 = fieldWeight in 1181, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1181)
      0.5 = coord(1/2)
    
    Abstract
    The Nuovo soggettario, the official Italian subject indexing system edited by the National Central Library of Florence, is made up of interactive components, the core of which is a general thesaurus and a set of rules of conventional syntax for subject string construction. The Nuovo soggettario Thesaurus complies with ISO 25964 (2011-2013), IFLA LRM, and the FAIR principles (findability, accessibility, interoperability, and reusability). Its open data are available in the Zthes, MARC21, and SKOS formats and allow for interoperability with library, archive, and museum databases. The Thesaurus's macrostructure is organized into four fundamental macro-categories, thirteen categories, and facets. The facets allow for the orderly development of hierarchies, thereby limiting polyhierarchies and promoting the grouping of homogeneous concepts. This paper addresses the main features and peculiarities which have characterized the consistent development of this categorical structure and its effects on the syntactic sphere in a predominantly pre-coordinated usage context.
    Date
    26.11.2023 18:59:22
  6. Rae, A.R.; Mork, J.G.; Demner-Fushman, D.: The National Library of Medicine indexer assignment dataset : a new large-scale dataset for reviewer assignment research (2023) 0.04
    0.03505692 = product of:
      0.07011384 = sum of:
        0.07011384 = sum of:
          0.03464029 = weight(_text_:library in 885) [ClassicSimilarity], result of:
            0.03464029 = score(doc=885,freq=6.0), product of:
              0.13768692 = queryWeight, product of:
                2.6293786 = idf(docFreq=8668, maxDocs=44218)
                0.052364815 = queryNorm
              0.25158736 = fieldWeight in 885, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                2.6293786 = idf(docFreq=8668, maxDocs=44218)
                0.0390625 = fieldNorm(doc=885)
          0.035473548 = weight(_text_:22 in 885) [ClassicSimilarity], result of:
            0.035473548 = score(doc=885,freq=2.0), product of:
              0.18337266 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052364815 = queryNorm
              0.19345059 = fieldWeight in 885, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=885)
      0.5 = coord(1/2)
    
    Abstract
    MEDLINE is the National Library of Medicine's (NLM) journal citation database. It contains over 28 million references to biomedical and life science journal articles, and a key feature of the database is that all articles are indexed with NLM Medical Subject Headings (MeSH). The library employs a team of MeSH indexers, and in recent years they have been asked to index close to 1 million articles per year in order to keep MEDLINE up to date. An important part of the MEDLINE indexing process is the assignment of articles to indexers. High quality and timely indexing is only possible when articles are assigned to indexers with suitable expertise. This article introduces the NLM indexer assignment dataset: a large dataset of 4.2 million indexer article assignments for articles indexed between 2011 and 2019. The dataset is shown to be a valuable testbed for expert matching and assignment algorithms, and indexer article assignment is also found to be useful domain-adaptive pre-training for the closely related task of reviewer assignment.
    Date
    22. 1.2023 18:49:49
  7. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.03
    0.03465384 = product of:
      0.06930768 = sum of:
        0.06930768 = product of:
          0.20792302 = sum of:
            0.20792302 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.20792302 = score(doc=5669,freq=2.0), product of:
                0.4439495 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.052364815 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    "The English-language Wikipedia now has more than 6 million articles. The German-language Wikipedia ranks second with 2.3 million articles, and the French-language Wikipedia third with 2.1 million articles (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> and Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Following the publication of the six-millionth article in the English-language Wikipedia last week, the community newspaper "Wikipedia Signpost" has called for a moratorium on the publication of articles about companies. This is not an accusation against the Wikimedia Foundation, but the current measures to protect the encyclopedia against abusive undeclared paid editing are clearly not working. *"Since the volunteer authors are currently being overwhelmed by advertising in the form of Wikipedia articles, and since the WMF does not appear able to counter this in any way, the only viable path for the authors would be to prohibit the creation of new articles about companies for the time being,"* writes user Smallbones in his editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> for today's issue."
  8. Cooke, N.A.; Kitzie, V.L.: Outsiders-within-Library and Information Science : reprioritizing the marginalized in critical sociocultural work (2021) 0.03
    0.033283874 = product of:
      0.06656775 = sum of:
        0.06656775 = sum of:
          0.023999494 = weight(_text_:library in 351) [ClassicSimilarity], result of:
            0.023999494 = score(doc=351,freq=2.0), product of:
              0.13768692 = queryWeight, product of:
                2.6293786 = idf(docFreq=8668, maxDocs=44218)
                0.052364815 = queryNorm
              0.17430481 = fieldWeight in 351, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.6293786 = idf(docFreq=8668, maxDocs=44218)
                0.046875 = fieldNorm(doc=351)
          0.042568255 = weight(_text_:22 in 351) [ClassicSimilarity], result of:
            0.042568255 = score(doc=351,freq=2.0), product of:
              0.18337266 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052364815 = queryNorm
              0.23214069 = fieldWeight in 351, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=351)
      0.5 = coord(1/2)
    
    Date
    18. 9.2021 13:22:27
  9. Wu, Z.; Li, R.; Zhou, Z.; Guo, J.; Jiang, J.; Su, X.: A user sensitive subject protection approach for book search service (2020) 0.03
    0.031878613 = product of:
      0.063757226 = sum of:
        0.063757226 = sum of:
          0.028283674 = weight(_text_:library in 5617) [ClassicSimilarity], result of:
            0.028283674 = score(doc=5617,freq=4.0), product of:
              0.13768692 = queryWeight, product of:
                2.6293786 = idf(docFreq=8668, maxDocs=44218)
                0.052364815 = queryNorm
              0.2054202 = fieldWeight in 5617, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                2.6293786 = idf(docFreq=8668, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5617)
          0.035473548 = weight(_text_:22 in 5617) [ClassicSimilarity], result of:
            0.035473548 = score(doc=5617,freq=2.0), product of:
              0.18337266 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052364815 = queryNorm
              0.19345059 = fieldWeight in 5617, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5617)
      0.5 = coord(1/2)
    
    Abstract
    In a digital library, book search is one of the most important information services. However, with the rapid development of network technologies such as cloud computing, the server-side of a digital library is becoming more and more untrusted; thus, how to prevent the disclosure of users' book query privacy is causing people's increasingly extensive concern. In this article, we propose to construct a group of plausible fake queries for each user book query to cover up the sensitive subjects behind users' queries. First, we propose a basic framework for the privacy protection in book search, which requires no change to the book search algorithm running on the server-side, and no compromise to the accuracy of book search. Second, we present a privacy protection model for book search to formulate the constraints that ideal fake queries should satisfy, that is, (i) the feature similarity, which measures the confusion effect of fake queries on users' queries, and (ii) the privacy exposure, which measures the cover-up effect of fake queries on users' sensitive subjects. Third, we discuss the algorithm implementation for the privacy model. Finally, the effectiveness of our approach is demonstrated by theoretical analysis and experimental evaluation.
    Date
    6. 1.2020 17:22:25
  10. Belabbes, M.A.; Ruthven, I.; Moshfeghi, Y.; Rasmussen Pennington, D.: Information overload : a concept analysis (2023) 0.03
    0.031878613 = product of:
      0.063757226 = sum of:
        0.063757226 = sum of:
          0.028283674 = weight(_text_:library in 950) [ClassicSimilarity], result of:
            0.028283674 = score(doc=950,freq=4.0), product of:
              0.13768692 = queryWeight, product of:
                2.6293786 = idf(docFreq=8668, maxDocs=44218)
                0.052364815 = queryNorm
              0.2054202 = fieldWeight in 950, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                2.6293786 = idf(docFreq=8668, maxDocs=44218)
                0.0390625 = fieldNorm(doc=950)
          0.035473548 = weight(_text_:22 in 950) [ClassicSimilarity], result of:
            0.035473548 = score(doc=950,freq=2.0), product of:
              0.18337266 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052364815 = queryNorm
              0.19345059 = fieldWeight in 950, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=950)
      0.5 = coord(1/2)
    
    Abstract
    Purpose With the shift to an information-based society and the decentralisation of information, information overload has attracted growing interest in the computer and information science research communities. However, there is no clear understanding of the meaning of the term, and while many definitions have been proposed, there is no consensus. The goal of this work was to define the concept of "information overload" by means of a concept analysis. Design/methodology/approach A concept analysis using Rodgers' approach, based on a corpus of documents published between 2010 and September 2020, was conducted. One surrogate for "information overload", namely "cognitive overload", was identified. The corpus consisted of 151 documents for information overload and ten for cognitive overload. All documents were from the fields of computer science and information science and were retrieved from three databases: the Association for Computing Machinery (ACM) Digital Library, SCOPUS, and Library and Information Science Abstracts (LISA). Findings The themes identified in the concept analysis allowed the authors to extract the triggers, manifestations and consequences of information overload. They found triggers related to information characteristics, information need, the working environment, the cognitive abilities of individuals, and the information environment. In terms of manifestations, they found that information overload manifests itself both emotionally and cognitively. The consequences of information overload were both internal and external. These findings allowed the authors to provide a definition of information overload. Originality/value Through the concept analysis, the authors were able to clarify the components of information overload and provide a definition of the concept.
    Date
    22. 4.2023 19:27:56
  11. Vakkari, P.; Järvelin, K.; Chang, Y.-W.: The association of disciplinary background with the evolution of topics and methods in Library and Information Science research 1995-2015 (2023) 0.03
    0.031878613 = product of:
      0.063757226 = sum of:
        0.063757226 = sum of:
          0.028283674 = weight(_text_:library in 998) [ClassicSimilarity], result of:
            0.028283674 = score(doc=998,freq=4.0), product of:
              0.13768692 = queryWeight, product of:
                2.6293786 = idf(docFreq=8668, maxDocs=44218)
                0.052364815 = queryNorm
              0.2054202 = fieldWeight in 998, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                2.6293786 = idf(docFreq=8668, maxDocs=44218)
                0.0390625 = fieldNorm(doc=998)
          0.035473548 = weight(_text_:22 in 998) [ClassicSimilarity], result of:
            0.035473548 = score(doc=998,freq=2.0), product of:
              0.18337266 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052364815 = queryNorm
              0.19345059 = fieldWeight in 998, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=998)
      0.5 = coord(1/2)
    
    Abstract
    The paper reports a longitudinal analysis of the topical and methodological development of Library and Information Science (LIS), focusing on the effects of researchers' disciplines on these developments. The study extends an earlier cross-sectional study (Vakkari et al., Journal of the Association for Information Science and Technology, 2022a, 73, 1706-1722) with a coordinated dataset representing a content analysis of articles published in 31 scholarly LIS journals in 1995, 2005, and 2015. It is novel in its coverage of authors' disciplines and of topical and methodological aspects in a coordinated dataset spanning two decades, thus allowing trend analysis. The findings include a shrinking trend in the share of LIS from 67 to 36%, while Computer Science and Business and Economics increase their shares from 9 and 6% to 21 and 16%, respectively. The earlier cross-sectional study identified, for the year 2015, three topical clusters of LIS research focusing on topical subfields, methodologies, and contributing disciplines. Correspondence analysis confirms their existence already in 1995 and traces their development through the decades. The contributing disciplines infuse their concepts, research questions, and approaches into LIS and may also subsume vital parts of LIS into their own structures of knowledge production.
    Date
    22. 6.2023 18:15:06
  12. Yu, L.; Fan, Z.; Li, A.: A hierarchical typology of scholarly information units : based on a deduction-verification study (2020) 0.03
    0.030189082 = product of:
      0.060378164 = sum of:
        0.060378164 = sum of:
          0.031999324 = weight(_text_:library in 5655) [ClassicSimilarity], result of:
            0.031999324 = score(doc=5655,freq=8.0), product of:
              0.13768692 = queryWeight, product of:
                2.6293786 = idf(docFreq=8668, maxDocs=44218)
                0.052364815 = queryNorm
              0.23240642 = fieldWeight in 5655, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                2.6293786 = idf(docFreq=8668, maxDocs=44218)
                0.03125 = fieldNorm(doc=5655)
          0.028378839 = weight(_text_:22 in 5655) [ClassicSimilarity], result of:
            0.028378839 = score(doc=5655,freq=2.0), product of:
              0.18337266 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052364815 = queryNorm
              0.15476047 = fieldWeight in 5655, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=5655)
      0.5 = coord(1/2)
    
    Abstract
    Purpose The purpose of this paper is to lay a theoretical foundation for identifying operational information units for library and information professional activities in the context of scholarly communication. Design/methodology/approach The study adopts a deduction-verification approach to formulate a typology of units for scholarly information. It first deduces possible units from an existing conceptualization of information, which defines information as the combined product of data and meaning, and then tests the usefulness of these units via two empirical investigations, one with a group of scholarly papers and the other with a sample of scholarly information users. Findings The results show that, on defining an information unit as a piece of information that is complete in both data and meaning, to such an extent that it remains meaningful to its target audience when retrieved and displayed independently in a database, it is then possible to formulate a hierarchical typology of units for scholarly information. The typology proposed in this study consists of three levels, which, in turn, consist of 1, 5, and 44 units, respectively. Research limitations/implications The result of this study has theoretical implications on both the philosophical and conceptual levels: on the philosophical level, it hinges on, and reinforces, the objective view of information; on the conceptual level, it challenges the conceptualization of work by IFLA's Functional Requirements for Bibliographic Records and Library Reference Model but endorses that by the Library of Congress's BIBFRAME 2.0 model. Practical implications It calls for reconsideration of existing operational units in a variety of library and information activities. Originality/value The study strengthens the conceptual foundation of operational information units and brings to light the primacy of "one work" as an information unit and the possibility for it to be supplemented by smaller units.
    Date
    14. 1.2020 11:15:22
  13. Bedford, D.: Knowledge architectures : structures and semantics (2021) 0.03
    0.02550289 = product of:
      0.05100578 = sum of:
        0.05100578 = sum of:
          0.02262694 = weight(_text_:library in 566) [ClassicSimilarity], result of:
            0.02262694 = score(doc=566,freq=4.0), product of:
              0.13768692 = queryWeight, product of:
                2.6293786 = idf(docFreq=8668, maxDocs=44218)
                0.052364815 = queryNorm
              0.16433616 = fieldWeight in 566, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                2.6293786 = idf(docFreq=8668, maxDocs=44218)
                0.03125 = fieldNorm(doc=566)
          0.028378839 = weight(_text_:22 in 566) [ClassicSimilarity], result of:
            0.028378839 = score(doc=566,freq=2.0), product of:
              0.18337266 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052364815 = queryNorm
              0.15476047 = fieldWeight in 566, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=566)
      0.5 = coord(1/2)
    
    Abstract
    Knowledge Architectures reviews traditional approaches to managing information and explains why they need to adapt to support 21st-century information management and discovery. Exploring the rapidly changing environment in which information is being managed and accessed, the book considers how to use knowledge architectures, the basic structures and designs that underlie all of the parts of an effective information system, to best advantage. Drawing on 40 years of work with a variety of organizations, Bedford explains that failure to understand the structure behind any given system can be the difference between an effective solution and a significant and costly failure. Demonstrating that the information user environment has shifted significantly in the past 20 years, the book explains that end users now expect designs and behaviors that are much closer to the way they think, work, and act. Acknowledging how important it is that those responsible for developing an information or knowledge management system understand knowledge structures, the book goes beyond a traditional library science perspective and uses case studies to help translate the abstract and theoretical to the practical and concrete. Explaining the structures in a simple and intuitive way and providing examples that clearly illustrate the challenges faced by a range of different organizations, Knowledge Architectures is essential reading for those studying and working in library and information science, data science, systems development, database design, and search system architecture and engineering.
    Content
    Section 1 Context and purpose of knowledge architecture -- 1 Making the case for knowledge architecture -- 2 The landscape of knowledge assets -- 3 Knowledge architecture and design -- 4 Knowledge architecture reference model -- 5 Knowledge architecture segments -- Section 2 Designing for availability -- 6 Knowledge object modeling -- 7 Knowledge structures for encoding, formatting, and packaging -- 8 Functional architecture for identification and distinction -- 9 Functional architectures for knowledge asset disposition and destruction -- 10 Functional architecture designs for knowledge preservation and conservation -- Section 3 Designing for accessibility -- 11 Functional architectures for knowledge seeking and discovery -- 12 Functional architecture for knowledge search -- 13 Functional architecture for knowledge categorization -- 14 Functional architectures for indexing and keywording -- 15 Functional architecture for knowledge semantics -- 16 Functional architecture for knowledge abstraction and surrogation -- Section 4 Functional architectures to support knowledge consumption -- 17 Functional architecture for knowledge augmentation, derivation, and synthesis -- 18 Functional architecture to manage risk and harm -- 19 Functional architectures for knowledge authentication and provenance -- 20 Functional architectures for securing knowledge assets -- 21 Functional architectures for authorization and asset management -- Section 5 Pulling it all together - the big picture knowledge architecture -- 22 Functional architecture for knowledge metadata and metainformation -- 23 The whole knowledge architecture - pulling it all together
  14. ¬Der Student aus dem Computer (2023) 0.02
    Date
    27. 1.2023 16:22:55
  15. Jaeger, L.: Wissenschaftler versus Wissenschaft (2020) 0.02
    Date
    2. 3.2020 14:08:22
  16. Ibrahim, G.M.; Taylor, M.: Krebszellen manipulieren Neurone : Gliome (2023) 0.02
    Source
    Spektrum der Wissenschaft. 2023, H.10, S.22-24
  17. Handis, M.W.: Greek subject and name authorities, and the Library of Congress (2020) 0.02
    Abstract
    Some international libraries are still using the Anglo-American Cataloging Rules, 2nd edition revised, for cataloging even though the Library of Congress and other large libraries have retired it in favor of Resource Description and Access. One of these libraries is the National Library of Greece, which consults the Library of Congress database before establishing authorities. There are cultural differences in names and subjects between the Library of Congress and the National Library, but some National Library terms may be more appropriate for users than the Library of Congress-established forms.
  18. Koch, C.: Was ist Bewusstsein? (2020) 0.02
    Date
    17. 1.2020 22:15:11
  19. Wagner, E.: Über Impfstoffe zur digitalen Identität? (2020) 0.02
    Date
    4. 5.2020 17:22:40
  20. Engel, B.: Corona-Gesundheitszertifikat als Exitstrategie (2020) 0.02
    Date
    4. 5.2020 17:22:28

Languages

  • e 223
  • d 39

Types

  • a 249
  • el 38
  • m 8
  • p 2
  • s 1
  • x 1