Search (167 results, page 1 of 9)

  • year_i:[2020 TO 2030} (active filter: publication year 2020 inclusive to 2030 exclusive)
  1. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.15
    0.14815785 = product of:
      0.2963157 = sum of:
        0.074078925 = product of:
          0.22223677 = sum of:
            0.22223677 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.22223677 = score(doc=862,freq=2.0), product of:
                0.3954264 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04664141 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.22223677 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.22223677 = score(doc=862,freq=2.0), product of:
            0.3954264 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04664141 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.5 = coord(2/4)
    
    Source
    https://arxiv.org/abs/2212.06721
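    The indented figures above are Lucene "explain" traces from the ClassicSimilarity (TF-IDF) ranking model. As a cross-check of the arithmetic, the score of the top-ranked entry recombines from the values printed in its trace for document 862: each clause multiplies queryWeight (idf × queryNorm) by fieldWeight (tf × idf × fieldNorm), and the clause scores are then combined with the coord factors.
    \[
    \begin{aligned}
    \text{queryWeight} &= \text{idf} \times \text{queryNorm} = 8.478011 \times 0.04664141 \approx 0.3954264\\
    \text{fieldWeight} &= \text{tf} \times \text{idf} \times \text{fieldNorm} = 1.4142135 \times 8.478011 \times 0.046875 \approx 0.56201804\\
    \text{clause score} &= \text{queryWeight} \times \text{fieldWeight} \approx 0.22223677\\
    \text{final score} &= \underbrace{0.5}_{\text{coord}(2/4)} \times \bigl(\underbrace{0.33333334}_{\text{coord}(1/3)} \times 0.22223677 + 0.22223677\bigr) \approx 0.14815785
    \end{aligned}
    \]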
  2. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.12
    0.123464875 = product of:
      0.24692975 = sum of:
        0.061732437 = product of:
          0.18519731 = sum of:
            0.18519731 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
              0.18519731 = score(doc=1000,freq=2.0), product of:
                0.3954264 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04664141 = queryNorm
                0.46834838 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.33333334 = coord(1/3)
        0.18519731 = weight(_text_:2f in 1000) [ClassicSimilarity], result of:
          0.18519731 = score(doc=1000,freq=2.0), product of:
            0.3954264 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04664141 = queryNorm
            0.46834838 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
      0.5 = coord(2/4)
    
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. Cf. also the accompanying presentation: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
  3. Radford, M.L.; Costello, L.; Montague, K.E.: "Death of social encounters" : investigating COVID-19's initial impact on virtual reference services in academic libraries (2022) 0.05
    0.045194387 = product of:
      0.090388775 = sum of:
        0.060311824 = weight(_text_:reference in 749) [ClassicSimilarity], result of:
          0.060311824 = score(doc=749,freq=4.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.31784135 = fieldWeight in 749, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=749)
        0.030076953 = product of:
          0.060153905 = sum of:
            0.060153905 = weight(_text_:services in 749) [ClassicSimilarity], result of:
              0.060153905 = score(doc=749,freq=6.0), product of:
                0.1712379 = queryWeight, product of:
                  3.6713707 = idf(docFreq=3057, maxDocs=44218)
                  0.04664141 = queryNorm
                0.3512885 = fieldWeight in 749, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.6713707 = idf(docFreq=3057, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=749)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This investigation explores the initial impact of the COVID-19 pandemic on live chat virtual reference services (VRS) in academic libraries and on user behaviors from March to December 2020 using Goffman's theoretical framework (1956, 1967, 1971). Data from 300 responses by academic librarians to two longitudinal online surveys and 28 semi-structured interviews were quantitatively and qualitatively analyzed. Results revealed that academic librarians were well-positioned to provide VRS as university information hubs during pandemic shutdowns. Qualitative analysis revealed that participants received gratitude for VRS help, but also experienced frustrations and angst with limited accessibility during COVID-19. Participants reported changes including VRS volume, level of complexity, and question topics. Results reveal the range and frequency of new services with librarians striving to make personal connections with users through VRS, video consultations, video chat, and other strategies. Participants found it difficult to maintain these connections, coping through grit and mutual support when remote work became necessary. They adapted to challenges, including isolation, technology learning curves, and disrupted work routines. Librarians' responses chronicle their innovative approaches, fierce determination, emotional labor, and dedication to helping users and colleagues through this unprecedented time. Results have vital implications for the future of VRS.
  4. Le Provost, A.; Nicolas, .: IdRef, Paprika and Qualinka : a toolbox for authority data quality and interoperability (2020) 0.04
    0.042008284 = product of:
      0.08401657 = sum of:
        0.059705656 = weight(_text_:reference in 1076) [ClassicSimilarity], result of:
          0.059705656 = score(doc=1076,freq=2.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.31464687 = fieldWeight in 1076, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1076)
        0.024310911 = product of:
          0.048621822 = sum of:
            0.048621822 = weight(_text_:services in 1076) [ClassicSimilarity], result of:
              0.048621822 = score(doc=1076,freq=2.0), product of:
                0.1712379 = queryWeight, product of:
                  3.6713707 = idf(docFreq=3057, maxDocs=44218)
                  0.04664141 = queryNorm
                0.28394312 = fieldWeight in 1076, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.6713707 = idf(docFreq=3057, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1076)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Authority data has always been at the core of library catalogues. Today, authority data is reference data on a wider scale. The former authorities of the "Sudoc" union catalogue mutated into "IdRef", a read/write platform of open data and services which seeks to become a national supplier of reliable identifiers for French universities. To support their dissemination and comply with high quality standards, Paprika and Qualinka have been added to our toolbox, to expedite the massive and secure linking of scientific publications to IdRef authorities.
  5. Golub, K.; Tyrkkö, J.; Hansson, J.; Ahlström, I.: Subject indexing in humanities : a comparison between a local university repository and an international bibliographic service (2020) 0.04
    0.038688384 = product of:
      0.07737677 = sum of:
        0.0426469 = weight(_text_:reference in 5982) [ClassicSimilarity], result of:
          0.0426469 = score(doc=5982,freq=2.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.22474778 = fieldWeight in 5982, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5982)
        0.034729872 = product of:
          0.069459744 = sum of:
            0.069459744 = weight(_text_:services in 5982) [ClassicSimilarity], result of:
              0.069459744 = score(doc=5982,freq=8.0), product of:
                0.1712379 = queryWeight, product of:
                  3.6713707 = idf(docFreq=3057, maxDocs=44218)
                  0.04664141 = queryNorm
                0.405633 = fieldWeight in 5982, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.6713707 = idf(docFreq=3057, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5982)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    As the humanities develop in the realm of increasingly more pronounced digital scholarship, it is important to provide quality subject access to a vast range of heterogeneous information objects in digital services. The study aims to paint a representative picture of the current state of affairs of the use of subject index terms in humanities journal articles with particular reference to the well-established subject access needs of humanities researchers, with the purpose of identifying which improvements are needed in this context. Design/methodology/approach: The comparison of subject metadata on a sample of 649 peer-reviewed journal articles from across the humanities is conducted in a university repository, against Scopus, the former reflecting local and national policies and the latter being the most comprehensive international abstract and citation database of research output. Findings: The study shows that established bibliographic objectives to ensure subject access for humanities journal articles are not supported in either the world's largest commercial abstract and citation database Scopus or the local repository of a public university in Sweden. The indexing policies in the two services do not seem to address the needs of humanities scholars for highly granular subject index terms with appropriate facets; no controlled vocabularies for any humanities discipline are used whatsoever. Originality/value: In all, not much has changed since the 1990s, when indexing for the humanities was shown to lag behind the sciences. The community of researchers and information professionals, today working together on digital humanities projects, as well as interdisciplinary research teams, should demand that their subject access needs be fulfilled, especially in commercial services like Scopus and discovery services.
  6. Tang, X.-B.; Fu, W.-G.; Liu, Y.: Knowledge big graph fusing ontology with property graph : a case study of financial ownership network (2021) 0.03
    0.033602312 = product of:
      0.067204624 = sum of:
        0.0426469 = weight(_text_:reference in 234) [ClassicSimilarity], result of:
          0.0426469 = score(doc=234,freq=2.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.22474778 = fieldWeight in 234, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=234)
        0.024557726 = product of:
          0.049115453 = sum of:
            0.049115453 = weight(_text_:services in 234) [ClassicSimilarity], result of:
              0.049115453 = score(doc=234,freq=4.0), product of:
                0.1712379 = queryWeight, product of:
                  3.6713707 = idf(docFreq=3057, maxDocs=44218)
                  0.04664141 = queryNorm
                0.28682584 = fieldWeight in 234, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.6713707 = idf(docFreq=3057, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=234)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The scale of knowledge is growing rapidly in the big data environment, and traditional knowledge organization and services have faced the dilemma of semantic inaccuracy and untimeliness. From a knowledge fusion perspective, combining the precise semantic superiority of traditional ontology with the large-scale graph processing power and the predicate attribute expression ability of the property graph, this paper presents an ontology and property graph fusion framework (OPGFF). The fusion process is divided into content layer fusion and constraint layer fusion. The result of the fusion, that is, the knowledge representation model, is called the knowledge big graph. In addition, this paper applies the knowledge big graph model to the ownership network in China's financial field and builds a financial ownership knowledge big graph. Furthermore, this paper designs and implements six consistency inference algorithms for finding contradictory data and filling in missing data in the financial ownership knowledge big graph, five of which are completely domain agnostic. The correctness and validity of the algorithms have been experimentally verified with actual data. The fusion OPGFF framework and the implementation method of the knowledge big graph could provide technical reference for big data knowledge organization and services.
  7. Hartel, J.: The red thread of information (2020) 0.03
    0.029222533 = product of:
      0.058445066 = sum of:
        0.0426469 = weight(_text_:reference in 5839) [ClassicSimilarity], result of:
          0.0426469 = score(doc=5839,freq=2.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.22474778 = fieldWeight in 5839, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5839)
        0.015798168 = product of:
          0.031596337 = sum of:
            0.031596337 = weight(_text_:22 in 5839) [ClassicSimilarity], result of:
              0.031596337 = score(doc=5839,freq=2.0), product of:
                0.16333027 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04664141 = queryNorm
                0.19345059 = fieldWeight in 5839, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5839)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose: In The Invisible Substrate of Information Science, a landmark article about the discipline of information science, Marcia J. Bates wrote that "... we are always looking for the red thread of information in the social texture of people's lives" (1999a, p. 1048). To sharpen our understanding of information science and to elaborate Bates' idea, the work at hand answers the question: Just what does the red thread of information entail? Design/methodology/approach: Through a close reading of Bates' oeuvre and by applying concepts from the reference literature of information science, nine composite entities that qualify as the red thread of information are identified, elaborated, and related to existing concepts in the information science literature. In the spirit of a scientist-poet (White, 1999), several playful metaphors related to the color red are employed. Findings: Bates' red thread of information entails: terms, genres, literatures, classification systems, scholarly communication, information retrieval, information experience, information institutions, and information policy. This same constellation of phenomena can be found in resonant visions of information science, namely, domain analysis (Hjørland, 2002), ethnography of infrastructure (Star, 1999), and social epistemology (Shera, 1968). Research limitations/implications: With the vital vermilion filament in clear view, newcomers can more easily engage the material, conceptual, and social machinery of information science, and specialists are reminded of what constitutes information science as a whole. Future researchers and scientist-poets may wish to supplement the nine composite entities with additional, emergent information phenomena. Originality/value: Though the explication of information science that follows is relatively orthodox and time-bound, the paper offers an imaginative, accessible, yet technically precise way of understanding the field.
    Date
    30. 4.2020 21:03:22
  8. Das, S.; Bagchi, M.; Hussey, P.: How to teach domain ontology-based knowledge graph construction? : an Irish experiment (2023) 0.03
    0.029222533 = product of:
      0.058445066 = sum of:
        0.0426469 = weight(_text_:reference in 1126) [ClassicSimilarity], result of:
          0.0426469 = score(doc=1126,freq=2.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.22474778 = fieldWeight in 1126, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1126)
        0.015798168 = product of:
          0.031596337 = sum of:
            0.031596337 = weight(_text_:22 in 1126) [ClassicSimilarity], result of:
              0.031596337 = score(doc=1126,freq=2.0), product of:
                0.16333027 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04664141 = queryNorm
                0.19345059 = fieldWeight in 1126, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1126)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Domains represent concepts which belong to specific parts of the world. The particularized meaning of words linguistically encoding such domain concepts is provided by domain-specific resources. The explicit meaning of such words is increasingly captured computationally using domain-specific ontologies, which, even for the same reference domain, are more often than not semantically incompatible. As information systems that rely on domain ontologies expand, there is a growing need not only to design domain ontologies and domain ontology-grounded Knowledge Graphs (KGs) but also to align them to general standards and conventions for interoperability. This often presents an insurmountable challenge to domain experts, who have to additionally learn the construction of domain ontologies and KGs. Until now, several research methodologies have been proposed by different research groups using different technical approaches and based on scenarios of different domains of application. However, no methodology has been proposed which not only facilitates designing conceptually well-founded ontologies, but is also, equally, grounded in the general pedagogical principles of knowledge organization and, thereby, flexible enough to teach and reproduce vis-à-vis domain experts. The purpose of this paper is to provide such a general, pedagogically flexible semantic knowledge modelling methodology. We exemplify the methodology by examples and illustrations from a professional-level digital healthcare course, and conclude with an evaluation grounded in technological parameters as well as user experience design principles.
    Date
    20.11.2023 17:19:22
  9. Eadon, Y.M.: (Not) part of the system : resolving epistemic disconnect through archival reference (2020) 0.03
    0.02860841 = product of:
      0.11443364 = sum of:
        0.11443364 = weight(_text_:reference in 23) [ClassicSimilarity], result of:
          0.11443364 = score(doc=23,freq=10.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.60306156 = fieldWeight in 23, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.046875 = fieldNorm(doc=23)
      0.25 = coord(1/4)
    
    Abstract
    Information seeking practices of conspiracists are examined by introducing the new archival user group of "conspiracist researchers." The epistemic commitments of archival knowledge organization (AKO), rooted in provenance and access/secrecy, fundamentally conflict with the epistemic features of conspiracism, namely: mistrust of authority figures and institutions, accompanying overreliance on firsthand inquiry, and a tendency towards indicative mood/confirmation bias. Through interviews with reference personnel working at two state archives in the American west, I illustrate that the reference interaction is a vital turning point for the conspiracist researcher. Reference personnel can build trust with conspiracist researchers by displaying epistemic empathy and subverting hegemonic archival logics. The burden of bridging the epistemic gap through archival user education thus falls almost exclusively onto reference personnel. Domain analysis is presented as one possible starting point for developing an archival knowledge organization system (AKOS) that could be more epistemically flexible.
  10. Golub, K.; Ziolkowski, P.M.; Zlodi, G.: Organizing subject access to cultural heritage in Swedish online museums (2022) 0.02
    0.024004735 = product of:
      0.04800947 = sum of:
        0.03411752 = weight(_text_:reference in 688) [ClassicSimilarity], result of:
          0.03411752 = score(doc=688,freq=2.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.17979822 = fieldWeight in 688, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.03125 = fieldNorm(doc=688)
        0.013891948 = product of:
          0.027783897 = sum of:
            0.027783897 = weight(_text_:services in 688) [ClassicSimilarity], result of:
              0.027783897 = score(doc=688,freq=2.0), product of:
                0.1712379 = queryWeight, product of:
                  3.6713707 = idf(docFreq=3057, maxDocs=44218)
                  0.04664141 = queryNorm
                0.1622532 = fieldWeight in 688, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.6713707 = idf(docFreq=3057, maxDocs=44218)
                  0.03125 = fieldNorm(doc=688)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose: The study aims to paint a representative picture of the current state of search interfaces of Swedish online museum collections, focussing on search functionalities with particular reference to subject searching, as well as the use of controlled vocabularies, with the purpose of identifying which improvements of the search interfaces are needed to ensure high-quality information retrieval for the end user. Design/methodology/approach: In the first step, a set of 21 search interface criteria was identified, based on related research and current standards in the domain of cultural heritage knowledge organization. Secondly, a complete set of Swedish museums that provide online access to their collections was identified, comprising nine cross-search services and 91 individual museums' websites. These 100 websites were each evaluated against the 21 criteria, between 1 July and 31 August 2020. Findings: Although many standards and guidelines are in place to ensure quality-controlled subject indexing, which in turn support information retrieval of relevant resources (as individual or full search results), the study shows that they are not broadly implemented, resulting in information retrieval failures for the end user. The study also demonstrates a strong need for the implementation of controlled vocabularies in these museums. Originality/value: This study is a rare piece of research which examines subject searching in online museums; the 21 search criteria and their use in the analysis of the complete set of online collections of a country represent a considerable and unique contribution to the fields of knowledge organization and information retrieval of cultural heritage. Its particular value lies in showing how the needs of end users, many of which are documented and reflected in international standards and guidelines, should be taken into account in designing search tools for these museums; especially so in subject searching, which is the most complex and yet the most common type of search. Much effort has been invested into digitizing cultural heritage collections, but access to them is hindered by poor search functionality. This study identifies which are the most important aspects to improve.
  11. Yu, L.; Fan, Z.; Li, A.: A hierarchical typology of scholarly information units : based on a deduction-verification study (2020) 0.02
    0.023378028 = product of:
      0.046756055 = sum of:
        0.03411752 = weight(_text_:reference in 5655) [ClassicSimilarity], result of:
          0.03411752 = score(doc=5655,freq=2.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.17979822 = fieldWeight in 5655, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.03125 = fieldNorm(doc=5655)
        0.012638534 = product of:
          0.025277069 = sum of:
            0.025277069 = weight(_text_:22 in 5655) [ClassicSimilarity], result of:
              0.025277069 = score(doc=5655,freq=2.0), product of:
                0.16333027 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04664141 = queryNorm
                0.15476047 = fieldWeight in 5655, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5655)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose: The purpose of this paper is to lay a theoretical foundation for identifying operational information units for library and information professional activities in the context of scholarly communication. Design/methodology/approach: The study adopts a deduction-verification approach to formulate a typology of units for scholarly information. It first deduces possible units from an existing conceptualization of information, which defines information as the combined product of data and meaning, and then tests the usefulness of these units via two empirical investigations, one with a group of scholarly papers and the other with a sample of scholarly information users. Findings: The results show that, on defining an information unit as a piece of information that is complete in both data and meaning, to such an extent that it remains meaningful to its target audience when retrieved and displayed independently in a database, it is then possible to formulate a hierarchical typology of units for scholarly information. The typology proposed in this study consists of three levels, which, in turn, consist of 1, 5 and 44 units, respectively. Research limitations/implications: The result of this study has theoretical implications on both the philosophical and conceptual levels: on the philosophical level, it hinges on, and reinforces, the objective view of information; on the conceptual level, it challenges the conceptualization of work by IFLA's Functional Requirements for Bibliographic Records and Library Reference Model but endorses that by Library of Congress's BIBFRAME 2.0 model. Practical implications: It calls for reconsideration of existing operational units in a variety of library and information activities. Originality/value: The study strengthens the conceptual foundation of operational information units and brings to light the primacy of "one work" as an information unit and the possibility for it to be supplemented by smaller units.
    Date
    14. 1.2020 11:15:22
  12. Bedford, D.: Knowledge architectures : structures and semantics (2021) 0.02
    0.023378028 = product of:
      0.046756055 = sum of:
        0.03411752 = weight(_text_:reference in 566) [ClassicSimilarity], result of:
          0.03411752 = score(doc=566,freq=2.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.17979822 = fieldWeight in 566, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.03125 = fieldNorm(doc=566)
        0.012638534 = product of:
          0.025277069 = sum of:
            0.025277069 = weight(_text_:22 in 566) [ClassicSimilarity], result of:
              0.025277069 = score(doc=566,freq=2.0), product of:
                0.16333027 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04664141 = queryNorm
                0.15476047 = fieldWeight in 566, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=566)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Content
    Section 1 Context and purpose of knowledge architecture -- 1 Making the case for knowledge architecture -- 2 The landscape of knowledge assets -- 3 Knowledge architecture and design -- 4 Knowledge architecture reference model -- 5 Knowledge architecture segments -- Section 2 Designing for availability -- 6 Knowledge object modeling -- 7 Knowledge structures for encoding, formatting, and packaging -- 8 Functional architecture for identification and distinction -- 9 Functional architectures for knowledge asset disposition and destruction -- 10 Functional architecture designs for knowledge preservation and conservation -- Section 3 Designing for accessibility -- 11 Functional architectures for knowledge seeking and discovery -- 12 Functional architecture for knowledge search -- 13 Functional architecture for knowledge categorization -- 14 Functional architectures for indexing and keywording -- 15 Functional architecture for knowledge semantics -- 16 Functional architecture for knowledge abstraction and surrogation -- Section 4 Functional architectures to support knowledge consumption -- 17 Functional architecture for knowledge augmentation, derivation, and synthesis -- 18 Functional architecture to manage risk and harm -- 19 Functional architectures for knowledge authentication and provenance -- 20 Functional architectures for securing knowledge assets -- 21 Functional architectures for authorization and asset management -- Section 5 Pulling it all together - the big picture knowledge architecture -- 22 Functional architecture for knowledge metadata and metainformation -- 23 The whole knowledge architecture - pulling it all together
  13. Williams, B.: Dimensions & VOSViewer bibliometrics in the reference interview (2020) 0.02
    0.02110914 = product of:
      0.08443656 = sum of:
        0.08443656 = weight(_text_:reference in 5719) [ClassicSimilarity], result of:
          0.08443656 = score(doc=5719,freq=4.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.4449779 = fieldWeight in 5719, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5719)
      0.25 = coord(1/4)
    
    Abstract
    The VOSviewer software provides easy access to bibliometric mapping using data from Dimensions, Scopus and Web of Science. The properly formatted and structured citation data, and the ease with which it can be exported, open up new avenues for use during citation searches and reference interviews. This paper details specific techniques for using advanced searches in Dimensions, exporting the citation data, and drawing insights from the maps produced in VOSviewer. These search techniques and data export practices are fast and accurate enough to build into reference interviews for graduate students, faculty, and post-PhD researchers. The search results derived from them are accurate and allow a more comprehensive view of citation networks embedded in ordinary complex Boolean searches.
  14. Palsdottir, A.: Data literacy and management of research data : a prerequisite for the sharing of research data (2021) 0.02
    0.018350048 = product of:
      0.07340019 = sum of:
        0.07340019 = sum of:
          0.048123125 = weight(_text_:services in 183) [ClassicSimilarity], result of:
            0.048123125 = score(doc=183,freq=6.0), product of:
              0.1712379 = queryWeight, product of:
                3.6713707 = idf(docFreq=3057, maxDocs=44218)
                0.04664141 = queryNorm
              0.2810308 = fieldWeight in 183, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.6713707 = idf(docFreq=3057, maxDocs=44218)
                0.03125 = fieldNorm(doc=183)
          0.025277069 = weight(_text_:22 in 183) [ClassicSimilarity], result of:
            0.025277069 = score(doc=183,freq=2.0), product of:
              0.16333027 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04664141 = queryNorm
              0.15476047 = fieldWeight in 183, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=183)
      0.25 = coord(1/4)
    
    Abstract
    Purpose: The purpose of this paper is to investigate the knowledge and attitude about research data management, the use of data management methods and the perceived need for support, in relation to participants' field of research. Design/methodology/approach: This is a quantitative study. Data were collected by an email survey and sent to 792 academic researchers and doctoral students. Total response rate was 18% (N = 139). The measurement instrument consisted of six sets of questions: about data management plans, the assignment of additional information to research data, about metadata, standard file naming systems, training in data management methods and the storing of research data. Findings: The main finding is that knowledge about the procedures of data management is limited, and data management is not a normal practice in the researcher's work. They were, however, in general, of the opinion that the university should take the lead by recommending and offering access to the necessary tools of data management. Taken together, the results indicate that there is an urgent need to increase the researchers' understanding of the importance of data management that is based on professional knowledge and to provide them with resources and training that enable them to make effective and productive use of data management methods. Research limitations/implications: The survey was sent to all members of the population rather than to a sample of it. Because of the response rate, the results cannot be generalized to all researchers at the university. Nevertheless, the findings may provide an important understanding about their research data procedures, in particular what characterizes their knowledge about data management and attitude towards it. Practical implications: Awareness of these issues is essential for information specialists at academic libraries, together with other units within the universities, to be able to design infrastructures and develop services that suit the needs of the research community. The findings can be used to develop data policies and services, based on professional knowledge of best practices and recognized standards, that assist the research community in data management. Originality/value: The study contributes to the existing literature about research data management by examining the results by participants' field of research. Recognition of the issues is critical in order for information specialists in collaboration with universities to design relevant infrastructures and services for academics and doctoral students that can promote their research data management.
    Date
    20. 1.2015 18:30:22
  15. Rockelle Strader, C.: Cataloging to support information literacy : the IFLA Library Reference Model's user tasks in the context of the Framework for Information Literacy for Higher Education (2021) 0.02
    0.018093549 = product of:
      0.072374195 = sum of:
        0.072374195 = weight(_text_:reference in 713) [ClassicSimilarity], result of:
          0.072374195 = score(doc=713,freq=4.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.38140965 = fieldWeight in 713, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.046875 = fieldNorm(doc=713)
      0.25 = coord(1/4)
    
    Abstract
    Cataloging practices, as exemplified by the five user tasks of the IFLA Library Reference Model, can support information literacy practices. The six frames of the Framework for Information Literacy for Higher Education are used as lenses to examine the user tasks. Two themes emerge from this examination: context matters, and catalogers must tailor bibliographic descriptions to meet users' expectations and information needs. Catalogers need to solicit feedback from various user communities to reform cataloging practices to remain current and viable. Such conversations will enrich the catalog and enhance (reclaim?) its position as a primary tool for research and learning. Supplemental data for this article is available online at https://doi.org/10.1080/01639374.2021.1939828.
  16. Wu, Z.; Li, R.; Zhou, Z.; Guo, J.; Jiang, J.; Su, X.: A user sensitive subject protection approach for book search service (2020) 0.02
    0.016581552 = product of:
      0.06632621 = sum of:
        0.06632621 = sum of:
          0.034729872 = weight(_text_:services in 5617) [ClassicSimilarity], result of:
            0.034729872 = score(doc=5617,freq=2.0), product of:
              0.1712379 = queryWeight, product of:
                3.6713707 = idf(docFreq=3057, maxDocs=44218)
                0.04664141 = queryNorm
              0.2028165 = fieldWeight in 5617, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6713707 = idf(docFreq=3057, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5617)
          0.031596337 = weight(_text_:22 in 5617) [ClassicSimilarity], result of:
            0.031596337 = score(doc=5617,freq=2.0), product of:
              0.16333027 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04664141 = queryNorm
              0.19345059 = fieldWeight in 5617, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5617)
      0.25 = coord(1/4)
    
    Abstract
    In a digital library, book search is one of the most important information services. However, with the rapid development of network technologies such as cloud computing, the server side of a digital library is becoming more and more untrusted; thus, how to prevent the disclosure of users' book-query privacy is attracting increasingly widespread concern. In this article, we propose to construct a group of plausible fake queries for each user book query to cover up the sensitive subjects behind users' queries. First, we propose a basic framework for privacy protection in book search, which requires no change to the book search algorithm running on the server side and no compromise to the accuracy of book search. Second, we present a privacy protection model for book search to formulate the constraints that ideal fake queries should satisfy, that is, (i) the feature similarity, which measures the confusion effect of fake queries on users' queries, and (ii) the privacy exposure, which measures the cover-up effect of fake queries on users' sensitive subjects. Third, we discuss the algorithm implementation for the privacy model. Finally, the effectiveness of our approach is demonstrated by theoretical analysis and experimental evaluation.
    Date
    6. 1.2020 17:22:25
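    The two constraints named in the abstract of entry 16 (feature similarity vs. privacy exposure) can be illustrated with a deliberately simplified sketch. The similarity measure, the subject labels, and the selection rule below are invented for illustration and are not the algorithm of Wu et al.

# Toy illustration only (not Wu et al.'s implementation): pick fake queries that
# look like the user's query (high "feature similarity") while pointing at other
# subjects (low "privacy exposure"), so the sensitive subject is covered up.

from dataclasses import dataclass

@dataclass
class Candidate:
    text: str     # candidate fake query
    subject: str  # coarse subject label of the candidate

def feature_similarity(user_query: str, fake: str) -> float:
    """Crude surface similarity: closeness in character length and word count."""
    chars = min(len(user_query), len(fake)) / max(len(user_query), len(fake))
    words = min(len(user_query.split()), len(fake.split())) / max(len(user_query.split()), len(fake.split()))
    return 0.5 * (chars + words)

def privacy_exposure(sensitive_subject: str, fake: Candidate) -> float:
    """1.0 if the fake query would itself reveal the sensitive subject, else 0.0."""
    return 1.0 if fake.subject == sensitive_subject else 0.0

def pick_fakes(user_query: str, sensitive_subject: str,
               candidates: list[Candidate], k: int = 2) -> list[Candidate]:
    """Rank candidates by similarity minus exposure and keep the top k."""
    ranked = sorted(
        candidates,
        key=lambda c: feature_similarity(user_query, c.text) - privacy_exposure(sensitive_subject, c),
        reverse=True,
    )
    return ranked[:k]

if __name__ == "__main__":
    cover = pick_fakes(
        user_query="coping with anxiety disorders",
        sensitive_subject="mental health",
        candidates=[
            Candidate("growing tomatoes in small gardens", "gardening"),
            Candidate("history of baroque architecture", "architecture"),
            Candidate("living with panic attacks", "mental health"),
        ],
    )
    print([c.text for c in cover])  # the two similar-looking, non-sensitive queries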
  17. Hjørland, B.: Table of contents (ToC) (2022) 0.02
    0.016581552 = product of:
      0.06632621 = sum of:
        0.06632621 = sum of:
          0.034729872 = weight(_text_:services in 1096) [ClassicSimilarity], result of:
            0.034729872 = score(doc=1096,freq=2.0), product of:
              0.1712379 = queryWeight, product of:
                3.6713707 = idf(docFreq=3057, maxDocs=44218)
                0.04664141 = queryNorm
              0.2028165 = fieldWeight in 1096, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6713707 = idf(docFreq=3057, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1096)
          0.031596337 = weight(_text_:22 in 1096) [ClassicSimilarity], result of:
            0.031596337 = score(doc=1096,freq=2.0), product of:
              0.16333027 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04664141 = queryNorm
              0.19345059 = fieldWeight in 1096, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1096)
      0.25 = coord(1/4)
    
    Abstract
    A table of contents (ToC) is a kind of document representation as well as a paratext and a kind of finding device for the document it represents. ToCs are very common in books and some other kinds of documents, but not in all kinds. This article discusses the definition and functions of ToCs, normative guidelines for their design, and the history and forms of ToCs in different kinds of documents and media. A main part of the article is about the role of ToCs in information searching, in current awareness services and as items added to bibliographical records. The introduction and the conclusion focus on the core theoretical issues concerning ToCs: should they be document-oriented or request-oriented, neutral or policy-oriented, objective or subjective? It is concluded that, because of the special functions of ToCs, the arguments for the request-oriented (policy-oriented, subjective) view are weaker than they are in relation to indexing and knowledge organization in general. Apart from level of granularity, the evaluation of a ToC is difficult to separate from the evaluation of the structuring and naming of the elements of the structure of the document it represents.
    Date
    18.11.2023 13:47:22
  18. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.02
    0.015433109 = product of:
      0.061732437 = sum of:
        0.061732437 = product of:
          0.18519731 = sum of:
            0.18519731 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.18519731 = score(doc=5669,freq=2.0), product of:
                0.3954264 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04664141 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  19. Yang, T.-H.; Hsieh, Y.-L.; Liu, S.-H.; Chang, Y.-C.; Hsu, W.-L.: A flexible template generation and matching method with applications for publication reference metadata extraction (2021) 0.02
    0.015077956 = product of:
      0.060311824 = sum of:
        0.060311824 = weight(_text_:reference in 63) [ClassicSimilarity], result of:
          0.060311824 = score(doc=63,freq=4.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.31784135 = fieldWeight in 63, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=63)
      0.25 = coord(1/4)
    
    Abstract
    Conventional rule-based approaches use exact template matching to capture linguistic information and necessarily need to enumerate all variations. We propose a novel flexible template generation and matching scheme called the principle-based approach (PBA) based on sequence alignment, and employ it for reference metadata extraction (RME) to demonstrate its effectiveness. The main contributions of this research are threefold. First, we propose an automatic template generation that can capture prominent patterns using the dominating set algorithm. Second, we devise an alignment-based template-matching technique that uses a logistic regression model, which makes it more general and flexible than pure rule-based approaches. Last, we apply PBA to RME on extensive cross-domain corpora and demonstrate its robustness and generality. Experiments reveal that the same set of templates produced by the PBA framework not only deliver consistent performance on various unseen domains, but also surpass hand-crafted knowledge (templates). We use four independent journal style test sets and one conference style test set in the experiments. When compared to renowned machine learning methods, such as conditional random fields (CRF), as well as recent deep learning methods (i.e., bi-directional long short-term memory with a CRF layer, Bi-LSTM-CRF), PBA has the best performance for all datasets.
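    The abstract of entry 19 names a dominating-set step for selecting representative templates. Purely as a generic illustration of that idea (not the authors' PBA code; the token-overlap similarity, the 0.4 threshold, and the sample reference strings are invented for the sketch), a greedy dominating-set selection over a similarity graph of reference strings could look like this:

# Generic sketch of greedy dominating-set template selection (illustrative only):
# link reference strings whose token overlap exceeds a threshold, then repeatedly
# pick the string that "covers" the most not-yet-covered strings.

def token_overlap(a: str, b: str) -> float:
    """Jaccard overlap of the lower-cased token sets of two reference strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def greedy_dominating_set(refs: list[str], threshold: float = 0.4) -> list[int]:
    """Return indices of template candidates that together dominate all references."""
    n = len(refs)
    neighbours = [
        {j for j in range(n) if j != i and token_overlap(refs[i], refs[j]) >= threshold}
        for i in range(n)
    ]
    uncovered, chosen = set(range(n)), []
    while uncovered:
        # Greedy choice: the reference covering the most still-uncovered references.
        best = max(range(n), key=lambda i: len((neighbours[i] | {i}) & uncovered))
        chosen.append(best)
        uncovered -= neighbours[best] | {best}
    return chosen

if __name__ == "__main__":
    refs = [
        "Smith, J. (2020). Deep learning basics. Journal of AI, 12(3), 45-67.",
        "Doe, A. (2019). Graph methods for text. Journal of AI, 11(1), 1-20.",
        "Brown, K. Machine translation revisited. In Proc. ACL 2018, pp. 100-110.",
    ]
    print(greedy_dominating_set(refs))  # indices of the selected template candidates

    In the paper's pipeline the selected strings would then be generalized into alignment-based templates; that step lies outside this sketch.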
  20. Kelly, M.: Epistemology, epistemic belief, personal epistemology, and epistemics : a review of concepts as they impact information behavior research (2021) 0.02
    0.015077956 = product of:
      0.060311824 = sum of:
        0.060311824 = weight(_text_:reference in 170) [ClassicSimilarity], result of:
          0.060311824 = score(doc=170,freq=4.0), product of:
            0.18975449 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.04664141 = queryNorm
            0.31784135 = fieldWeight in 170, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=170)
      0.25 = coord(1/4)
    
    Abstract
    A review of a range of epistemic concepts that are commonly researched was conducted with reference to conventional epistemology and with reference to foundational approaches to justification. These were assessed in relation to previous research undertaken linking information behavior and experience with paradigm, metatheory, and discourse. This research assesses how the epistemic concept is treated, both within information science and within disciplines that have affinities to the topics or agents that have been the subject of inquiry within the field. An attempt is made to clarify the types of connections that are associated with the epistemic concept and to provide a clearer view of how research focused on information behavior might consider the questions underpinning assumptions relating to knowledge and knowing. The symbiotic connection between epistemics and information science is advanced as a suitably nuanced conception of socially organized knowledge from which to define the appropriate level at which knowledge claims can be usefully advanced. It is proposed that fostering a better understanding of epistemics as a research practice might also provide for the development of a range of insights and methods that reflect the dynamic context within which the study of information behavior and information experience is located.

Languages

  • e 135
  • d 32

Types

  • a 155
  • el 25
  • m 6
  • p 2
  • x 1