Search (343 results, page 1 of 18)

  • year_i:[2020 TO 2030}
  1. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.21
    0.20821986 = product of:
      0.4164397 = sum of:
        0.052449387 = product of:
          0.15734816 = sum of:
            0.15734816 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
              0.15734816 = score(doc=1000,freq=2.0), product of:
                0.3359639 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03962768 = queryNorm
                0.46834838 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.33333334 = coord(1/3)
        0.15734816 = weight(_text_:2f in 1000) [ClassicSimilarity], result of:
          0.15734816 = score(doc=1000,freq=2.0), product of:
            0.3359639 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03962768 = queryNorm
            0.46834838 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
        0.049294014 = weight(_text_:studies in 1000) [ClassicSimilarity], result of:
          0.049294014 = score(doc=1000,freq=4.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.3117402 = fieldWeight in 1000, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
        0.15734816 = weight(_text_:2f in 1000) [ClassicSimilarity], result of:
          0.15734816 = score(doc=1000,freq=2.0), product of:
            0.3359639 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03962768 = queryNorm
            0.46834838 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
      0.5 = coord(4/8)
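The indented tree above is standard Lucene "explain" output for the ClassicSimilarity (tf-idf) ranking model. As a minimal sketch of how those numbers compose (assuming ClassicSimilarity's documented formulas, not this search engine's actual code), the top-level score of result 1 can be reproduced from the constants in the tree:

```python
import math

# Minimal sketch (assumes Lucene ClassicSimilarity's documented formulas;
# not this search engine's actual code). All constants below are copied
# from the explanation tree for result 1 (doc 1000).

def classic_idf(doc_freq, max_docs):
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, field_norm, query_norm):
    # score = queryWeight * fieldWeight, with tf = sqrt(freq)
    idf = classic_idf(doc_freq, max_docs)
    query_weight = idf * query_norm                   # e.g. 8.478011 * queryNorm = 0.3359639
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

QUERY_NORM = 0.03962768
FIELD_NORM = 0.0390625  # fieldNorm(doc=1000)

# The four matching clauses of result 1; the "3a" clause is itself
# scaled by an inner coord(1/3).
w_3a      = term_score(2.0, 24, 44218, FIELD_NORM, QUERY_NORM) / 3
w_2f_a    = term_score(2.0, 24, 44218, FIELD_NORM, QUERY_NORM)    # 0.15734816
w_studies = term_score(4.0, 2222, 44218, FIELD_NORM, QUERY_NORM)  # 0.049294014
w_2f_b    = w_2f_a

# Top level: the sum of the clause scores times coord(4/8), since
# 4 of the 8 query clauses matched.
total = (w_3a + w_2f_a + w_studies + w_2f_b) * (4 / 8)
print(round(total, 6))  # ≈ 0.20822
```

The same arithmetic, with each document's own constants swapped in, accounts for every score tree in this result list.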
    
    Content
    Master thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. Cf. the accompanying presentation: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
    Imprint
    Wien : Universität Wien / Library and Information Studies
  2. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.17
    0.16521555 = product of:
      0.4405748 = sum of:
        0.06293926 = product of:
          0.18881777 = sum of:
            0.18881777 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.18881777 = score(doc=862,freq=2.0), product of:
                0.3359639 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03962768 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.18881777 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.18881777 = score(doc=862,freq=2.0), product of:
            0.3359639 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03962768 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
        0.18881777 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.18881777 = score(doc=862,freq=2.0), product of:
            0.3359639 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03962768 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.375 = coord(3/8)
    
    Source
    https://arxiv.org/abs/2212.06721
  3. Detlor, B.; Julien, H.; La Rose, T.; Serenko, A.: Community-led digital literacy training : toward a conceptual framework (2022) 0.04
    0.04146699 = product of:
      0.11057864 = sum of:
        0.033409793 = weight(_text_:libraries in 662) [ClassicSimilarity], result of:
          0.033409793 = score(doc=662,freq=4.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.25664487 = fieldWeight in 662, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0390625 = fieldNorm(doc=662)
        0.042312715 = weight(_text_:case in 662) [ClassicSimilarity], result of:
          0.042312715 = score(doc=662,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.24286987 = fieldWeight in 662, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0390625 = fieldNorm(doc=662)
        0.034856133 = weight(_text_:studies in 662) [ClassicSimilarity], result of:
          0.034856133 = score(doc=662,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.22043361 = fieldWeight in 662, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=662)
      0.375 = coord(3/8)
    
    Abstract
    An exploratory study investigated the factors affecting digital literacy training offered by local community organizations, such as public libraries. Theory based on the educational assessment and information literacy instruction literatures, community informatics, and situated learning theory served as a lens of investigation. Case studies of two public libraries and five other local community organizations were carried out. Data collection comprised: one-on-one interviews with administrators, instructors, and community members who received training; analysis of training documents; observations of training sessions; and a survey administered to clients who participated in these training sessions. Data analysis yielded the generation of a holistic conceptual framework. The framework identifies salient factors of the learning environment and program components that affect learning outcomes arising from digital literacy training led by local community organizations. Theoretical propositions are made. Member checks confirmed the validity of the study's findings. Results are compared to prior theory. Recommendations for practice highlight the need to organize and train staff, acquire sustainable funding, reach marginalized populations, offer convenient training times to end-users, better market the training, share and adopt best practices, and better collect and analyze program performance measurement data. Implications for future research also are identified.
  4. Preminger, M.; Rype, I.; Ådland, M.K.; Massey, D.; Tallerås, K.: The public library metadata landscape : the case of Norway 2017-2018 (2020) 0.04
    0.041197337 = product of:
      0.16478935 = sum of:
        0.08101445 = weight(_text_:libraries in 5802) [ClassicSimilarity], result of:
          0.08101445 = score(doc=5802,freq=12.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.6223308 = fieldWeight in 5802, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5802)
        0.0837749 = weight(_text_:case in 5802) [ClassicSimilarity], result of:
          0.0837749 = score(doc=5802,freq=4.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.48085782 = fieldWeight in 5802, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5802)
      0.25 = coord(2/8)
    
    Abstract
    The aim of this paper is to gauge the cataloging practices within the public library sector seen from the catalog with Norway as a case, based on a sample of records from public libraries and cataloging agencies. Findings suggest that libraries make few changes to records they import from central agencies, and that larger libraries make more changes than smaller libraries. Findings also suggest that libraries catalog and modify records with their patrons in mind, and though the extent is not large, cataloging proficiency is still required in the public library domain, at least in larger libraries, in order to ensure correct and consistent metadata.
  5. Wu, Y.: Organization of complex topics in comprehensive classification schemes : case studies of disaster and security (2023) 0.04
    0.03779743 = product of:
      0.10079314 = sum of:
        0.023624292 = weight(_text_:libraries in 1117) [ClassicSimilarity], result of:
          0.023624292 = score(doc=1117,freq=2.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.18147534 = fieldWeight in 1117, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1117)
        0.042312715 = weight(_text_:case in 1117) [ClassicSimilarity], result of:
          0.042312715 = score(doc=1117,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.24286987 = fieldWeight in 1117, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1117)
        0.034856133 = weight(_text_:studies in 1117) [ClassicSimilarity], result of:
          0.034856133 = score(doc=1117,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.22043361 = fieldWeight in 1117, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1117)
      0.375 = coord(3/8)
    
    Abstract
    This research investigates how comprehensive classifications and home-grown classifications organize complex topics. Two comprehensive classifications and two home-grown taxonomies are used to examine two complex topics: disaster and security. The two comprehensive classifications are the Library of Congress Classification and the Classification Scheme for Chinese Libraries. The two home-grown taxonomies are the AIRS 211 LA County Taxonomy of Human Services - Disaster Services, and the Human Security Taxonomy. It is found that a comprehensive classification may provide many subclasses of a complex topic, which are scattered across various classes. Occasionally the classification scheme may provide several small taxonomies that organize the terms of a subclass of the complex topic pulled from multiple classes. However, the comprehensive classification provides no organization of the major subclasses of the complex topic. This lack of organization may prevent users from understanding the complex topic systematically, and so prevent them from selecting an appropriate classification term for it. Ideally, a comprehensive classification should provide a high-level conceptual framework for the complex topic, or at least organize its major subclasses in a way that helps users understand the topic systematically.
  6. Thomer, A.K.: Integrative data reuse at scientifically significant sites : case studies at Yellowstone National Park and the La Brea Tar Pits (2022) 0.03
    0.033415094 = product of:
      0.13366038 = sum of:
        0.07328778 = weight(_text_:case in 639) [ClassicSimilarity], result of:
          0.07328778 = score(doc=639,freq=6.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.420663 = fieldWeight in 639, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0390625 = fieldNorm(doc=639)
        0.06037259 = weight(_text_:studies in 639) [ClassicSimilarity], result of:
          0.06037259 = score(doc=639,freq=6.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.3818022 = fieldWeight in 639, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=639)
      0.25 = coord(2/8)
    
    Abstract
    Scientifically significant sites are the source of, and long-term repository for, considerable amounts of data-particularly in the natural sciences. However, the unique data practices of the researchers and resource managers at these sites have been relatively understudied. Through case studies of two scientifically significant sites (the hot springs at Yellowstone National Park and the fossil deposits at the La Brea Tar Pits), I developed rich descriptions of site-based research and data curation, and high-level data models of information classes needed to support integrative data reuse. Each framework treats the geospatial site and its changing natural characteristics as a distinct class of information; more commonly considered information classes such as observational and sampling data, and project metadata, are defined in relation to the site itself. This work contributes (a) case studies of the values and data needs for researchers and resource managers at scientifically significant sites, (b) an information framework to support integrative reuse at these sites, and (c) a discussion of data practices at scientifically significant sites.
  7. Lorenzo, L.; Mak, L.; Smeltekop, N.: FAST Headings in MODS : Michigan State University libraries digital repository case study (2023) 0.03
    0.032637153 = product of:
      0.13054861 = sum of:
        0.04677371 = weight(_text_:libraries in 1177) [ClassicSimilarity], result of:
          0.04677371 = score(doc=1177,freq=4.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.35930282 = fieldWeight in 1177, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1177)
        0.0837749 = weight(_text_:case in 1177) [ClassicSimilarity], result of:
          0.0837749 = score(doc=1177,freq=4.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.48085782 = fieldWeight in 1177, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1177)
      0.25 = coord(2/8)
    
    Abstract
    The Michigan State University Libraries (MSUL) digital repository contains numerous collections of openly available material. Since 2016, the digital repository has been using Faceted Application of Subject Terminology (FAST) subject headings as its primary subject vocabulary in order to streamline faceting, display, and search. The MSUL FAST use case presents some challenges that are not addressed by existing MARC-focused FAST tools. This paper will outline the MSUL digital repository team's justification for including FAST headings in the digital repository as well as workflows for adding FAST headings to Metadata Object Description Schema (MODS) metadata, their maintenance, and utilization for discovery.
  8. Bedford, D.: Knowledge architectures : structures and semantics (2021) 0.03
    0.032435358 = product of:
      0.08649429 = sum of:
        0.047871374 = weight(_text_:case in 566) [ClassicSimilarity], result of:
          0.047871374 = score(doc=566,freq=4.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.2747759 = fieldWeight in 566, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03125 = fieldNorm(doc=566)
        0.027884906 = weight(_text_:studies in 566) [ClassicSimilarity], result of:
          0.027884906 = score(doc=566,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.17634688 = fieldWeight in 566, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03125 = fieldNorm(doc=566)
        0.010738007 = product of:
          0.021476014 = sum of:
            0.021476014 = weight(_text_:22 in 566) [ClassicSimilarity], result of:
              0.021476014 = score(doc=566,freq=2.0), product of:
                0.13876937 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03962768 = queryNorm
                0.15476047 = fieldWeight in 566, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=566)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    Knowledge Architectures reviews traditional approaches to managing information and explains why they need to adapt to support 21st-century information management and discovery. Exploring the rapidly changing environment in which information is being managed and accessed, the book considers how to use knowledge architectures, the basic structures and designs that underlie all of the parts of an effective information system, to best advantage. Drawing on 40 years of work with a variety of organizations, Bedford explains that failure to understand the structure behind any given system can be the difference between an effective solution and a significant and costly failure. Demonstrating that the information user environment has shifted significantly in the past 20 years, the book explains that end users now expect designs and behaviors that are much closer to the way they think, work, and act. Acknowledging how important it is that those responsible for developing an information or knowledge management system understand knowledge structures, the book goes beyond a traditional library science perspective and uses case studies to help translate the abstract and theoretical to the practical and concrete. Explaining the structures in a simple and intuitive way and providing examples that clearly illustrate the challenges faced by a range of different organizations, Knowledge Architectures is essential reading for those studying and working in library and information science, data science, systems development, database design, and search system architecture and engineering.
    Content
    Section 1 Context and purpose of knowledge architecture -- 1 Making the case for knowledge architecture -- 2 The landscape of knowledge assets -- 3 Knowledge architecture and design -- 4 Knowledge architecture reference model -- 5 Knowledge architecture segments -- Section 2 Designing for availability -- 6 Knowledge object modeling -- 7 Knowledge structures for encoding, formatting, and packaging -- 8 Functional architecture for identification and distinction -- 9 Functional architectures for knowledge asset disposition and destruction -- 10 Functional architecture designs for knowledge preservation and conservation -- Section 3 Designing for accessibility -- 11 Functional architectures for knowledge seeking and discovery -- 12 Functional architecture for knowledge search -- 13 Functional architecture for knowledge categorization -- 14 Functional architectures for indexing and keywording -- 15 Functional architecture for knowledge semantics -- 16 Functional architecture for knowledge abstraction and surrogation -- Section 4 Functional architectures to support knowledge consumption -- 17 Functional architecture for knowledge augmentation, derivation, and synthesis -- 18 Functional architecture to manage risk and harm -- 19 Functional architectures for knowledge authentication and provenance -- 20 Functional architectures for securing knowledge assets -- 21 Functional architectures for authorization and asset management -- Section 5 Pulling it all together - the big picture knowledge architecture -- 22 Functional architecture for knowledge metadata and metainformation -- 23 The whole knowledge architecture - pulling it all together
  9. Ekstrand, M.D.; Wright, K.L.; Pera, M.S.: Enhancing classroom instruction with online news (2020) 0.03
    0.030576281 = product of:
      0.122305125 = sum of:
        0.042312715 = weight(_text_:case in 5844) [ClassicSimilarity], result of:
          0.042312715 = score(doc=5844,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.24286987 = fieldWeight in 5844, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5844)
        0.07999241 = sum of:
          0.0531474 = weight(_text_:area in 5844) [ClassicSimilarity], result of:
            0.0531474 = score(doc=5844,freq=2.0), product of:
              0.1952553 = queryWeight, product of:
                4.927245 = idf(docFreq=870, maxDocs=44218)
                0.03962768 = queryNorm
              0.27219442 = fieldWeight in 5844, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.927245 = idf(docFreq=870, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5844)
          0.026845016 = weight(_text_:22 in 5844) [ClassicSimilarity], result of:
            0.026845016 = score(doc=5844,freq=2.0), product of:
              0.13876937 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03962768 = queryNorm
              0.19345059 = fieldWeight in 5844, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5844)
      0.25 = coord(2/8)
    
    Abstract
    Purpose: This paper investigates how school teachers look for informational texts for their classrooms. Access to current, varied and authentic informational texts improves learning outcomes for K-12 students, but many teachers lack resources to expand and update readings. The Web offers freely available resources, but finding suitable ones is time-consuming. This research lays the groundwork for building tools to ease that burden.
    Design/methodology/approach: This paper reports qualitative findings from a study in two stages: (1) a set of semistructured interviews, based on the critical incident technique, eliciting teachers' information-seeking practices and challenges; and (2) observations of teachers using a prototype teaching-oriented news search tool under a think-aloud protocol.
    Findings: Teachers articulated different objectives and ways of using readings in their classrooms; goals and self-reported practices varied by experience level. Teachers struggled to formulate queries that are likely to return readings on specific course topics, instead searching directly for abstract topics. Experience differences did not translate into observable differences in search skill or success in the lab study.
    Originality/value: There is limited work on teachers' information-seeking practices, particularly on how teachers look for texts for classroom use. This paper describes how teachers look for information in this context, setting the stage for future development and research on how to support this use case. Understanding and supporting teachers looking for information is a rich area for future research, due to the complexity of the information need and the fact that teachers are not looking for information for themselves.
    Date
    20. 1.2015 18:30:22
  10. McElfresh, L.K.: Creator name standardization using faceted vocabularies in the BTAA geoportal : Michigan State University libraries digital repository case study (2023) 0.03
    0.029130917 = product of:
      0.11652367 = sum of:
        0.057285864 = weight(_text_:libraries in 1178) [ClassicSimilarity], result of:
          0.057285864 = score(doc=1178,freq=6.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.4400543 = fieldWeight in 1178, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1178)
        0.059237804 = weight(_text_:case in 1178) [ClassicSimilarity], result of:
          0.059237804 = score(doc=1178,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.34001783 = fieldWeight in 1178, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1178)
      0.25 = coord(2/8)
    
    Abstract
    Digital libraries incorporate metadata from varied sources, ranging from traditional catalog data to author-supplied descriptions. The Big Ten Academic Alliance (BTAA) Geoportal unites geospatial resources from the libraries of the BTAA, compounding the variability of metadata. The BTAA Geospatial Information Network's (BTAA GIN) Metadata Committee works to ensure completeness and consistency of metadata in the Geoportal, including a project to standardize the contents of the Creator field. The project comprises an OpenRefine data cleaning phase; evaluation of controlled vocabularies for semiautomated matching via OpenRefine reconciliation; and development and testing of a best practices guide for application of a controlled vocabulary.
  11. Tharani, K.: Just KOS! : enriching digital collections with hypertexts to enhance accessibility of non-western knowledge materials in libraries (2020) 0.03
    0.028541472 = product of:
      0.11416589 = sum of:
        0.06339063 = weight(_text_:libraries in 5896) [ClassicSimilarity], result of:
          0.06339063 = score(doc=5896,freq=10.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.4869494 = fieldWeight in 5896, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.046875 = fieldNorm(doc=5896)
        0.05077526 = weight(_text_:case in 5896) [ClassicSimilarity], result of:
          0.05077526 = score(doc=5896,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.29144385 = fieldWeight in 5896, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=5896)
      0.25 = coord(2/8)
    
    Abstract
    The knowledge organization systems (KOS) in use at libraries are social constructs that were conceived in the Euro-American context to organize and retrieve Western knowledge materials. As social constructs of the West, the effectiveness of library KOSs is limited when it comes to organization and retrieval of non-Western knowledge materials. How can librarians respond if asked to make non-Western knowledge materials as accessible as Western materials in their libraries? The accessibility of Western and non-Western knowledge materials in libraries need not be an either-or proposition. By way of a case study, a practical way forward is presented by which librarians can use their professional agency and existing digital technologies to exercise social justice. More specifically I demonstrate the design and development of a specialized KOS that enriches digital collections with hypertext features to enhance the accessibility of non-Western knowledge materials in libraries.
  12. Andrushchenko, M.; Sandberg, K.; Turunen, R.; Marjanen, J.; Hatavara, M.; Kurunmäki, J.; Nummenmaa, T.; Hyvärinen, M.; Teräs, K.; Peltonen, J.; Nummenmaa, J.: Using parsed and annotated corpora to analyze parliamentarians' talk in Finland (2022) 0.03
    Abstract
    We present a search system for grammatically analyzed corpora of Finnish parliamentary records and interviews with former parliamentarians, annotated with metadata on talk structure and the parliamentarians involved, and discuss their use through carefully chosen digital humanities case studies. We first introduce the construction, contents, and principles of use of the corpora. Then we discuss the application of the search system and the corpora to study how politicians talk about power, how ideological terms are used in political speech, and how to identify narratives in the data. All case studies stem from questions in the humanities and the social sciences, but rely on the grammatically parsed corpora for both identifying and quantifying passages of interest. Finally, the paper discusses the role of natural language processing methods for questions in the (digital) humanities. It makes the claim that a digital humanities inquiry into parliamentary speech and interviews with politicians cannot rely solely on computational humanities modeling, but needs to accommodate a range of perspectives, from simple searches and quantitative exploration to full modeling. Furthermore, the digital humanities need a more thorough discussion of how tools from information science and technology alter the research questions posed in the humanities.
  13. Peset, F.; Garzón-Farinós, F.; González, L.M.; García-Massó, X.; Ferrer-Sapena, A.; Toca-Herrera, J.L.; Sánchez-Pérez, E.A.: Survival analysis of author keywords : an application to the library and information sciences area (2020) 0.03
    Abstract
    Our purpose is to adapt a statistical method for the analysis of discrete numerical series to the keywords appearing in scientific articles of a given area. As an example, we apply our methodological approach to the study of keywords in the Library and Information Sciences (LIS) area. Our objective is to detect the new author keywords that appear in a fixed knowledge area within a period of one year, in order to quantify their probabilities of surviving for 10 years as a function of the impact of the journals where they appeared. Many of the new keywords appearing in the LIS field are ephemeral; indeed, more than half are never used again. In general, the terms most commonly used in the LIS area come from other areas. The average survival time of these keywords is approximately 3 years, and is slightly higher for words published in journals classified in the second quartile of the area. We believe that measuring the appearance and disappearance of terms will allow understanding of some relevant aspects of the evolution of a discipline, providing in this way a new bibliometric approach.
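The survival probabilities described in this abstract are a standard object of survival analysis. The abstract does not spell out the estimator the authors adapt, so the following is only a generic Kaplan-Meier sketch (function name and data layout are hypothetical) of how keyword lifetimes with censoring could be estimated:

```python
def keyword_survival(durations, observed):
    """Hedged sketch: Kaplan-Meier survival curve for author keywords.
    durations[i]: years from a keyword's first appearance to its last.
    observed[i]:  True if the keyword disappeared within the study
                  window, False if it was still in use (censored)."""
    at_risk = len(durations)
    survival, curve = 1.0, {}
    for t in sorted(set(durations)):
        # Deaths at time t reduce the survival probability ...
        deaths = sum(1 for d, o in zip(durations, observed) if d == t and o)
        survival *= (at_risk - deaths) / at_risk
        curve[t] = survival
        # ... and everyone reaching t (died or censored) leaves the risk set.
        at_risk -= sum(1 for d in durations if d == t)
    return curve
```

With this kind of curve, "survival for 10 years" would simply be read off at t = 10, separately for each journal-quartile subgroup.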
  14. Darch, P.T.; Sands, A.E.; Borgman, C.L.; Golshan, M.S.: Library cultures of data curation : adventures in astronomy (2020) 0.03
    Abstract
    University libraries are partnering with disciplinary data producers to provide long-term digital curation of research data sets. Managing data set producers' expectations and guiding future development of library services requires understanding the decisions libraries make about curatorial activities, why they make these decisions, and the effects on future data reuse. We present a study, comprising interviews (n = 43) and ethnographic observation, of two university libraries that partnered with the Sloan Digital Sky Survey (SDSS) collaboration to curate a significant astronomy data set. The two libraries made different choices about which materials to curate and which services to offer, which resulted in different reuse possibilities. Each of the libraries offered partial solutions to the SDSS leaders' objectives. The libraries' approaches to curation diverged owing to contextual factors, notably the infrastructure at their disposal (including technical infrastructure, staff expertise, values and internal culture, and organizational structure). The Data Transfer Process case offers lessons in understanding how libraries choose curation paths and how these choices influence possibilities for data reuse. Outcomes may not match data producers' initial expectations but may create opportunities for reusing data in unexpected and beneficial ways.
  15. Dattolo, A.; Corbatto, M.: Assisting researchers in bibliographic tasks : a new usable, real-time tool for analyzing bibliographies (2022) 0.02
    Abstract
    The number of scientific papers is growing together with science itself, but although there is an unprecedented availability of large citation indexes, some daily activities of researchers remain time-consuming and poorly supported. In this paper, we present Visual Bibliographies (VisualBib), a real-time visual platform designed using a zz-structure-based model for linking metadata and a narrative, visual approach for showing bibliographies. VisualBib is a usable, advanced, visual tool that simplifies the management of bibliographies, supports a core set of bibliographic tasks, and helps researchers during complex analyses of scientific bibliographies. We present the variety of metadata formats and visualization methods, proposing two use-case scenarios. The maturity of the system implementation allowed us to conduct two studies, evaluating both the effectiveness of VisualBib in providing answers to specific data analysis tasks and its ability to support experienced users during real-life use. The results of the evaluation are positive and describe an effective and usable platform.
  16. Hagen, L.; Patel, M.; Luna-Reyes, L.: Human-supervised data science framework for city governments : a design science approach (2023) 0.02
    Abstract
    The importance of involving humans in the data science process has been widely discussed in the literature. However, studies lack details on how to involve humans in the process. Using a design science approach, this paper proposes and evaluates a human-supervised data science framework in the context of local governments. Our findings suggest that involving a stakeholder group, public managers in this case, in the data science process enhanced the quality of data science outcomes. Public managers' detailed knowledge of both the data and the context was beneficial for improving future data science infrastructure. In addition, the study suggests that local governments can harness the value of data-driven approaches to policy and decision making through focused investments in improving data and data science infrastructure, which includes the culture and processes necessary to incorporate data science and analytics into the decision-making process.
  17. Järvelin, K.; Vakkari, P.: LIS research across 50 years: content analysis of journal articles : offering an information-centric conception of memes (2022) 0.02
    Abstract
    Purpose: This paper analyses research in Library and Information Science (LIS) and reports on (1) the status of LIS research in 2015 and (2) the evolution of LIS research longitudinally from 1965 to 2015.
    Design/methodology/approach: The study employs a quantitative intellectual content analysis of articles published in 30+ scholarly LIS journals, following the design by Tuomaala et al. (2014). In the content analysis, we classify articles along eight dimensions covering topical content and methodology.
    Findings: The topical findings indicate that the earlier strong LIS emphasis on L&I services has declined notably, while scientific and professional communication has become the most popular topic. Information storage and retrieval has given up its earlier strong position towards the end of the years analyzed. Individuals are increasingly the units of observation. End-users' and developers' viewpoints have strengthened at the cost of the intermediaries' viewpoint. LIS research is methodologically increasingly scattered, since surveys, scientometric methods, experiments, case studies and qualitative studies have all gained in popularity. Consequently, LIS may have become more versatile in the analysis of its research objects during the years analyzed.
    Originality/value: Among quantitative intellectual content analyses of LIS research, the study is unique in its scope: the length of the analysis period (50 years), its width (eight dimensions covering topical content and methodology) and its depth (an annual batch of 30+ scholarly journals).
  18. Carter, D.; Acker, A.; Sholler, D.: Investigative approaches to researching information technology companies (2021) 0.02
    Abstract
    Recent events reveal the potential for information technologies to threaten democratic participation and destabilize knowledge institutions. These are core concerns for researchers working within the area of critical information studies, yet these companies have also demonstrated novel tactics for obscuring their operations, reducing the ability of scholars to speak about how harms are perpetuated or to link them to larger systems. While scholars' methods and ethical conventions have historically privileged the agency of research participants, the current landscape suggests the value of exploring methods that would reveal actions that are purposefully hidden. We propose investigation as a model for critical information studies and review the methods and epistemological conventions of investigative journalists as a provocative example, noting that their orientation toward those in power enables them to discuss societal harms in ways that academic researchers often cannot. We conclude by discussing key topics, such as process accountability and institutional norms, that should feature in discussions of how academic researchers might position investigation in relation to their own work.
  19. Gartner, R.: Metadata in the digital library : building an integrated strategy with XML (2021) 0.02
    Abstract
    Metadata in the Digital Library is a complete guide to building a digital library metadata strategy from scratch, using established metadata standards bound together by the markup language XML. The book introduces the reader to the theory of metadata and shows how it can be applied in practice. It lays out the basic principles that should underlie any metadata strategy, including its relation to such fundamentals as the digital curation lifecycle, and demonstrates how they should be put into effect. It introduces the XML language and the key standards for each type of metadata, including Dublin Core and MODS for descriptive metadata and PREMIS for its administrative and preservation counterpart. Finally, the book shows how these can all be integrated using the packaging standard METS. Two case studies from the Warburg Institute in London show how the strategy can be implemented in a working environment. The strategy laid out in this book will ensure that a digital library's metadata will support all of its operations, be fully interoperable with others and enable its long-term preservation. It assumes no prior knowledge of metadata, XML or any of the standards that it covers. It provides both an introduction to best practices in digital library metadata and a manual for their practical implementation.
    Content
    Contents:
    1 Introduction, Aims and Definitions
       1.1 Origins
       1.2 From information science to libraries
       1.3 The central place of metadata
       1.4 The book in outline
    2 Metadata Basics
       2.1 Introduction
       2.2 Three types of metadata
          2.2.1 Descriptive metadata
          2.2.2 Administrative metadata
          2.2.3 Structural metadata
       2.3 The core components of metadata
          2.3.1 Syntax
          2.3.2 Semantics
          2.3.3 Content rules
       2.4 Metadata standards
       2.5 Conclusion
    3 Planning a Metadata Strategy: Basic Principles
       3.1 Introduction
       3.2 Principle 1: Support all stages of the digital curation lifecycle
       3.3 Principle 2: Support the long-term preservation of the digital object
       3.4 Principle 3: Ensure interoperability
       3.5 Principle 4: Control metadata content wherever possible
       3.6 Principle 5: Ensure software independence
       3.7 Principle 6: Impose a logical system of identifiers
       3.8 Principle 7: Use standards whenever possible
       3.9 Principle 8: Ensure the integrity of the metadata itself
       3.10 Summary: the basic principles of a metadata strategy
    4 Planning a Metadata Strategy: Applying the Basic Principles
       4.1 Introduction
       4.2 Initial steps: standards as a foundation
          4.2.1 'Off-the-shelf' standards
          4.2.2 Mapping out an architecture and serialising it into a standard
          4.2.3 Devising a local metadata scheme
          4.2.4 How standards support the basic principles
       4.3 Identifiers: everything in its place
    5 XML: The Syntactical Foundation of Metadata
       5.1 Introduction
       5.2 What XML looks like
       5.3 XML schemas
       5.4 Namespaces
       5.5 Creating and editing XML
       5.6 Transforming XML
       5.7 Why use XML?
    6 METS: The Metadata Package
       6.1 Introduction
       6.2 Why use METS?
  20. Yip, J.C.; Lee, K.J.; Lee, J.H.: Design partnerships for participatory librarianship : a conceptual model for understanding librarians co-designing with digital youth (2020) 0.02
    Abstract
    Libraries play a central role for youth and digital learning. As libraries transition to learning spaces, youth librarians can engage in aspects of democratic design that empowers youth. Participatory design (PD) is a user-centered design method that can support librarians in the democratic development of digital learning spaces. However, while PD has been used in libraries, we have little knowledge of how youth librarians can act as codesign partners. We need a conceptual model to understand the role of youth librarians in codesign, and how their experiences are integrated into youth design partnerships. To generate this model, we examine a case study of the evolutionary process of a single librarian and the development of a library system's learning activities through PD. Using the idea of equal design partnerships, we analyzed video recordings and stakeholder interviews on how children (ages 7-11) worked together with a librarian to develop new digital learning activities. Our discussion focuses on the development of a participatory librarian design conceptual model that situates librarians as design partners with youth. The article concludes with recommendations for integrating PD methods into libraries to create digital learning spaces and suggestions for moving forward with this design perspective.

Languages

  • e 305
  • d 35
  • pt 2
  • sp 1

Types

  • a 325
  • el 41
  • m 8
  • p 7
  • s 2
  • x 2