Search (2002 results, page 2 of 101)

  • Filter: year_i:[2010 TO 2020} (2010 inclusive to 2020 exclusive)
  1. Junger, U.: Can indexing be automated? : the example of the Deutsche Nationalbibliothek (2012) 0.09
    0.09123495 = product of:
      0.121646605 = sum of:
        0.060314562 = weight(_text_:digital in 1717) [ClassicSimilarity], result of:
          0.060314562 = score(doc=1717,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.30507088 = fieldWeight in 1717, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1717)
        0.026799891 = weight(_text_:library in 1717) [ClassicSimilarity], result of:
          0.026799891 = score(doc=1717,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.20335563 = fieldWeight in 1717, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1717)
        0.034532152 = product of:
          0.069064304 = sum of:
            0.069064304 = weight(_text_:project in 1717) [ClassicSimilarity], result of:
              0.069064304 = score(doc=1717,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.32644984 = fieldWeight in 1717, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1717)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    The German subject headings authority file (Schlagwortnormdatei/SWD) provides a broad controlled vocabulary for indexing documents of all subjects. Traditionally used for intellectual subject cataloguing, primarily of books, the Deutsche Nationalbibliothek (DNB, German National Library) has been working on developing and implementing procedures for automated assignment of subject headings for online publications. This project, its results and problems are sketched in the paper.
    Content
    Contribution to the conference: Beyond libraries - subject metadata in the digital environment and semantic web. IFLA Satellite Post-Conference, 17-18 August 2012, Tallinn. Cf.: http://www.nlib.ee/index.php?id=17763.
  2. Junger, U.: Can indexing be automated? : the example of the Deutsche Nationalbibliothek (2014) 0.09
    0.09123495 = product of:
      0.121646605 = sum of:
        0.060314562 = weight(_text_:digital in 1969) [ClassicSimilarity], result of:
          0.060314562 = score(doc=1969,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.30507088 = fieldWeight in 1969, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1969)
        0.026799891 = weight(_text_:library in 1969) [ClassicSimilarity], result of:
          0.026799891 = score(doc=1969,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.20335563 = fieldWeight in 1969, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1969)
        0.034532152 = product of:
          0.069064304 = sum of:
            0.069064304 = weight(_text_:project in 1969) [ClassicSimilarity], result of:
              0.069064304 = score(doc=1969,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.32644984 = fieldWeight in 1969, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1969)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    The German Integrated Authority File (Gemeinsame Normdatei, GND) provides a broad controlled vocabulary for indexing documents on all subjects. Traditionally used for intellectual subject cataloging primarily for books, the Deutsche Nationalbibliothek (DNB, German National Library) has been working on developing and implementing procedures for automated assignment of subject headings for online publications. This project, its results, and problems are outlined in this article.
    Footnote
    Contribution to a special issue "Beyond libraries: Subject metadata in the digital environment and Semantic Web" - contains the contributions of the IFLA Satellite Post-Conference of the same name, 17-18 August 2012, Tallinn.
  3. Mardis, M.A.; Hoffman, E.S.; McMartin, F.P.: Toward broader impacts : making sense of NSF's merit review criteria in the context of the National Science Digital Library (2012) 0.09
    0.089061745 = product of:
      0.11874899 = sum of:
        0.060926907 = weight(_text_:digital in 384) [ClassicSimilarity], result of:
          0.060926907 = score(doc=384,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.3081681 = fieldWeight in 384, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=384)
        0.033156272 = weight(_text_:library in 384) [ClassicSimilarity], result of:
          0.033156272 = score(doc=384,freq=6.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.25158736 = fieldWeight in 384, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=384)
        0.024665821 = product of:
          0.049331643 = sum of:
            0.049331643 = weight(_text_:project in 384) [ClassicSimilarity], result of:
              0.049331643 = score(doc=384,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.23317845 = fieldWeight in 384, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=384)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Scholars in library and information science are under increasing pressure to seek external funding for research. The National Science Foundation (NSF), which is often the source of this funding, considers proposed projects based on the criteria of "Intellectual Merit" and "Broader Impacts." However, these merit review criteria have been criticized as being insufficiently specific and not appropriate for all types of scientific research. In an effort to examine the extent to which funded projects represented Broader Impacts, the researchers performed a content analysis of the abstracts from projects in the National Science Digital Library, an NSF project that crossed many disciplines and applications, but is of particular relevance to information scientists. When the results of these analyses are placed in the context of the controversy surrounding the Broader Impacts merit review criterion, it is clear that this criterion is interpreted broadly and that even successful proposals often include aspirational or incomplete claims of impact. Because current proposed revisions to the merit review criteria that include emphases on demonstrable innovation and economic benefit will likely only complicate proposers' abilities to describe their projects' potentials, researchers may benefit from a greater understanding of Broader Impacts and how they can be clearly expressed to reviewers.
  4. Pattuelli, M.C.: Modeling a domain ontology for cultural heritage resources : a user-centered approach (2011) 0.09
    0.08900161 = product of:
      0.11866882 = sum of:
        0.07461992 = weight(_text_:digital in 4194) [ClassicSimilarity], result of:
          0.07461992 = score(doc=4194,freq=6.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.37742734 = fieldWeight in 4194, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4194)
        0.027071979 = weight(_text_:library in 4194) [ClassicSimilarity], result of:
          0.027071979 = score(doc=4194,freq=4.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.2054202 = fieldWeight in 4194, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4194)
        0.016976917 = product of:
          0.033953834 = sum of:
            0.033953834 = weight(_text_:22 in 4194) [ClassicSimilarity], result of:
              0.033953834 = score(doc=4194,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.19345059 = fieldWeight in 4194, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4194)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    The use of primary source materials is recognized as key to supporting history and social studies education. The extensive digitization of library, museum, and other cultural heritage collections represents an important teaching resource. Yet, searching and selecting digital primary sources appropriate for classroom use can be difficult and time-consuming. This study investigates the design requirements and the potential usefulness of a domain-specific ontology to facilitate access to, and use of, a collection of digital primary source materials developed by the Library of the University of North Carolina at Chapel Hill. During a three-phase study, an ontology model was designed and evaluated with the involvement of social studies teachers. The findings revealed that the design of the ontology was appropriate to support the information needs of the teachers and was perceived as a potentially useful tool to enhance collection access. The primary contribution of this study is the introduction of an approach to ontology development that is user-centered and designed to facilitate access to digital cultural heritage materials. Such an approach should be considered on a case-by-case basis in relation to the size of the ontology being built, the nature of the knowledge domain, and the type of end users targeted.
    Date
    22. 1.2011 14:11:34
  5. Renugadevi, S.; Geetha, T.V.; Gayathiri, R.L.; Prathyusha, S.; Kaviya, T.: Collaborative search using an implicitly formed academic network (2014) 0.09
    0.08730605 = product of:
      0.116408065 = sum of:
        0.034465462 = weight(_text_:digital in 1628) [ClassicSimilarity], result of:
          0.034465462 = score(doc=1628,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.17432621 = fieldWeight in 1628, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.03125 = fieldNorm(doc=1628)
        0.015314223 = weight(_text_:library in 1628) [ClassicSimilarity], result of:
          0.015314223 = score(doc=1628,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.11620321 = fieldWeight in 1628, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03125 = fieldNorm(doc=1628)
        0.06662838 = sum of:
          0.039465316 = weight(_text_:project in 1628) [ClassicSimilarity], result of:
            0.039465316 = score(doc=1628,freq=2.0), product of:
              0.21156175 = queryWeight, product of:
                4.220981 = idf(docFreq=1764, maxDocs=44218)
                0.050121464 = queryNorm
              0.18654276 = fieldWeight in 1628, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.220981 = idf(docFreq=1764, maxDocs=44218)
                0.03125 = fieldNorm(doc=1628)
          0.027163066 = weight(_text_:22 in 1628) [ClassicSimilarity], result of:
            0.027163066 = score(doc=1628,freq=2.0), product of:
              0.17551683 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050121464 = queryNorm
              0.15476047 = fieldWeight in 1628, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1628)
      0.75 = coord(3/4)
    
    Abstract
    Purpose - The purpose of this paper is to propose the Collaborative Search System that attempts to achieve collaboration by implicitly identifying and reflecting search behaviour of collaborators in an academic network that is automatically and dynamically formed. By using the constructed Collaborative Hit Matrix (CHM), results are obtained that are based on the search behaviour and earned preferences of specialist communities of researchers, which are relevant to the user's need and reduce the time spent on bad links. Design/methodology/approach - By using the Digital Bibliography Library Project (DBLP), the research communities are formed implicitly and dynamically based on the users' research presence in the search environment and in the publication scenario, which is also used to assign users' roles and establish links between the users. The CHM, to store the hit count and hit list of page results for queries, is also constructed and updated after every search session to enhance the collaborative search among the researchers. Findings - The implicit researchers community formation, the assignment and dynamic updating of roles of the researchers based on research, search presence and search behaviour on the web as well as the usage of these roles during Collaborative Web Search have highly improved the relevancy of results. The CHM that holds the collaborative responses provided by the researchers on the search query results to support searching distinguishes this system from others. Thus the proposed system considerably improves the relevancy and reduces the time spent on bad links, thus improving recall and precision. Originality/value - The research findings illustrate the better performance of the system, by connecting researchers working in the same field and allowing them to help each other in a web search environment.
    Date
    20. 1.2015 18:30:22
  6. Cushing, A.L.: "It's stuff that speaks to me" : exploring the characteristics of digital possessions (2013) 0.09
    0.08723827 = product of:
      0.17447653 = sum of:
        0.15533376 = weight(_text_:digital in 1013) [ClassicSimilarity], result of:
          0.15533376 = score(doc=1013,freq=26.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.7856777 = fieldWeight in 1013, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1013)
        0.01914278 = weight(_text_:library in 1013) [ClassicSimilarity], result of:
          0.01914278 = score(doc=1013,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.14525402 = fieldWeight in 1013, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1013)
      0.5 = coord(2/4)
    
    Abstract
    Digital possessions are digital items that individuals distinguish from other digital items by specific qualities that individuals perceive the digital items to possess. Twenty-three participants were interviewed about their definitions of and relationships with digital possessions to identify the most salient characteristics of digital possessions and to inform preservation. Findings indicate that digital possessions are characterized as (a) providing evidence of the individual, (b) representing the individual's identity, (c) being recognized as having value, and (d) exhibiting a sense of bounded control. Furthermore, archival concepts of primary, secondary, and intrinsic values provide the frame for the defining characteristics. Although several findings from this study are consistent with former studies of material possessions and digital possessions, this study expands research in the area using the concept of digital possessions to inform preservation and by applying archival principles of value. Understanding the nature of the individual and digital item relationship provides potential to explore new areas of reference and outreach services in libraries and archives. As the nature of archival and library reference services evolves, some scholars have predicted that archives and libraries will play a part in helping individuals manage their personal collections. An exploration of individuals' relationships with their digital possessions can serve as a starting point at which scholars can explore the potential of personal information management consulting as a new area of reference and information services, specifically for the preservation of personal digital material.
  7. Thompson, S.; Reilly, M.: ¬"A picture is worth a thousand words" : reverse image lookup and digital library assessment (2017) 0.09
    0.08711445 = product of:
      0.1742289 = sum of:
        0.120629124 = weight(_text_:digital in 3795) [ClassicSimilarity], result of:
          0.120629124 = score(doc=3795,freq=8.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.61014175 = fieldWeight in 3795, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3795)
        0.053599782 = weight(_text_:library in 3795) [ClassicSimilarity], result of:
          0.053599782 = score(doc=3795,freq=8.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.40671125 = fieldWeight in 3795, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3795)
      0.5 = coord(2/4)
    
    Abstract
    This brief communication builds on the application of content-based image retrieval (CBIR) and reverse image lookup (RIL), a graduated form of CBIR, as assessment tools for digital library image reuse. It combines literature on the definition, history, usefulness, and limitations of RIL and includes a brief analysis of the 4 published digital library image reuse assessment case studies. In its conclusion, the communication paper proposes that RIL offers benefits for digital library managers in the assessment of their collections.
  8. Mayo, D.; Bowers, K.: ¬The devil's shoehorn : a case study of EAD to ArchivesSpace migration at a large university (2017) 0.08
    0.0846572 = product of:
      0.112876266 = sum of:
        0.043081827 = weight(_text_:digital in 3373) [ClassicSimilarity], result of:
          0.043081827 = score(doc=3373,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.21790776 = fieldWeight in 3373, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3373)
        0.027071979 = weight(_text_:library in 3373) [ClassicSimilarity], result of:
          0.027071979 = score(doc=3373,freq=4.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.2054202 = fieldWeight in 3373, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3373)
        0.04272246 = product of:
          0.08544492 = sum of:
            0.08544492 = weight(_text_:project in 3373) [ClassicSimilarity], result of:
              0.08544492 = score(doc=3373,freq=6.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.40387696 = fieldWeight in 3373, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3373)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    A band of archivists and IT professionals at Harvard took on a project to convert nearly two million descriptions of archival collection components from marked-up text into the ArchivesSpace archival metadata management system. Starting in the mid-1990s, Harvard was an alpha implementer of EAD, an SGML (later XML) text markup language for electronic inventories, indexes, and finding aids that archivists use to wend their way through the sometimes quirky filing systems that bureaucracies establish for their records or the utter chaos in which some individuals keep their personal archives. These pathfinder documents, designed to cope with messy reality, can themselves be difficult to classify. Portions of them are rigorously structured, while other parts are narrative. Early documents predate the establishment of the standard; many feature idiosyncratic encoding that had been through several machine conversions, while others were freshly encoded and fairly consistent. In this paper, we will cover the practical and technical challenges involved in preparing a large (900MiB) corpus of XML for ingest into an open-source archival information system (ArchivesSpace). This case study will give an overview of the project, discuss problem discovery and problem solving, and address the technical challenges, analysis, solutions, and decisions and provide information on the tools produced and lessons learned. The authors of this piece are Kate Bowers, Collections Services Archivist for Metadata, Systems, and Standards at the Harvard University Archive, and Dave Mayo, a Digital Library Software Engineer for Harvard's Library and Technology Services. Kate was heavily involved in both metadata analysis and later problem solving, while Dave was the sole full-time developer assigned to the migration project.
  9. Reitsma, R.; Marshall, B.; Chart, T.: Can intermediary-based science standards crosswalking work? : some evidence from mining the standard alignment tool (SAT) (2012) 0.08
    0.084498525 = product of:
      0.1126647 = sum of:
        0.060926907 = weight(_text_:digital in 381) [ClassicSimilarity], result of:
          0.060926907 = score(doc=381,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.3081681 = fieldWeight in 381, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=381)
        0.027071979 = weight(_text_:library in 381) [ClassicSimilarity], result of:
          0.027071979 = score(doc=381,freq=4.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.2054202 = fieldWeight in 381, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=381)
        0.024665821 = product of:
          0.049331643 = sum of:
            0.049331643 = weight(_text_:project in 381) [ClassicSimilarity], result of:
              0.049331643 = score(doc=381,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.23317845 = fieldWeight in 381, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=381)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    We explore the feasibility of intermediary-based crosswalking and alignment of K-12 science education standards. With the increasing availability of K-12 science, technology, engineering, and mathematics (STEM) digital library content, alignment of that content with educational standards is a significant and continuous challenge. Whereas direct, one-to-one alignment of standards is preferable but currently unsustainable in its resource demands, less resource-intensive intermediary-based alignment offers an interesting alternative. But will it work? We present the results from an experiment in which the machine-based Standard Alignment Tool (SAT), incorporated in the National Science Digital Library (NSDL), was used to collect over half a million direct alignments between standards from different standard-authoring bodies. These were then used to compute intermediary-based alignments derived from the well-known AAAS Project 2061 Benchmarks and NSES standards. The results show strong variation among authoring bodies in their success at crosswalking, with the best results for those who modeled their standards on the intermediaries. The results furthermore show a strong inverse relationship between recall and precision when both intermediates were involved in the crosswalking.
  10. Zhang, J.: Archival context, digital content, and the ethics of digital archival representation : the ethics of identification in digital library metadata (2012) 0.08
    0.08419131 = product of:
      0.16838261 = sum of:
        0.14923984 = weight(_text_:digital in 419) [ClassicSimilarity], result of:
          0.14923984 = score(doc=419,freq=24.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.7548547 = fieldWeight in 419, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=419)
        0.01914278 = weight(_text_:library in 419) [ClassicSimilarity], result of:
          0.01914278 = score(doc=419,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.14525402 = fieldWeight in 419, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=419)
      0.5 = coord(2/4)
    
    Abstract
    The findings of a recent study on digital archival representation raise some ethical concerns about how digital archival materials are organized, described, and made available for use on the Web. Archivists have a fundamental obligation to preserve and protect the authenticity and integrity of records in their holdings and, at the same time, have the responsibility to promote the use of records as a fundamental purpose of the keeping of archives (SAA 2005 Code of Ethics for Archivists V & VI). Is it an ethical practice that digital content in digital archives is deeply embedded in its contextual structure and generally underrepresented in digital archival systems? Similarly, is it ethical for archivists to detach digital items from their archival context in order to make them more "digital friendly" and more accessible to meet needs of some users? Do archivists have an obligation to bring the two representation systems together so that the context and content of digital archives can be better represented and archival materials "can be located and used by anyone, for any purpose, while still remaining authentic evidence of the work and life of the creator"? (Millar 2010, 157) This paper discusses the findings of the study and their ethical implications relating to digital archival description and representation.
  11. Tsakonas, G.; Papatheodorou, C.: ¬An ontological representation of the digital library evaluation domain (2011) 0.08
    0.08348307 = product of:
      0.16696614 = sum of:
        0.11560067 = weight(_text_:digital in 4632) [ClassicSimilarity], result of:
          0.11560067 = score(doc=4632,freq=10.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.58470786 = fieldWeight in 4632, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=4632)
        0.05136547 = weight(_text_:library in 4632) [ClassicSimilarity], result of:
          0.05136547 = score(doc=4632,freq=10.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.38975742 = fieldWeight in 4632, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=4632)
      0.5 = coord(2/4)
    
    Abstract
    Digital library evaluation is a complex field, as complex as the phenomena it studies. The interest of the digital library society still remains vibrant after all these years of solidification, as these systems have entered real-life applications. However the community has still to reach a consensus on what evaluation is and how it can effectively be planned. In the present article, an ontology of the digital library evaluation domain, named DiLEO, is proposed, aiming to reveal explicitly the main concepts of this domain and their correlations, and it tries to combine creatively and integrate several scientific paradigms, approaches, methods, techniques, and tools. This article demonstrates the added value features of the ontology, which are the support of comparative studies between different evaluation initiatives and the assistance in effective digital library evaluation planning.
  12. Gore, E.; Bitta, M.D.; Cohen, D.: ¬The Digital Public Library of America and the National Digital Platform (2017) 0.08
    0.083210856 = product of:
      0.16642171 = sum of:
        0.1266342 = weight(_text_:digital in 3655) [ClassicSimilarity], result of:
          0.1266342 = score(doc=3655,freq=12.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.6405154 = fieldWeight in 3655, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=3655)
        0.039787523 = weight(_text_:library in 3655) [ClassicSimilarity], result of:
          0.039787523 = score(doc=3655,freq=6.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.30190483 = fieldWeight in 3655, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=3655)
      0.5 = coord(2/4)
    
    Abstract
    The Digital Public Library of America brings together the riches of America's libraries, archives, and museums, and makes them freely available to the world. In order to do this, DPLA has had to build elements of the national digital platform to connect to those institutions and to serve their digitized materials to audiences. In this article, we detail the construction of two critical elements of our work: the decentralized national network of "hubs," which operate in states across the country; and a version of the Hydra repository software that is tailored to the needs of our community. This technology and the organizations that make use of it serve as the foundation of the future of DPLA and other projects that seek to take advantage of the national digital platform.
    Object
    Digital Public Library of America
  13. Proceedings of the 2nd International Workshop on Semantic Digital Archives held in conjunction with the 16th Int. Conference on Theory and Practice of Digital Libraries (TPDL) on September 27, 2012 in Paphos, Cyprus (2012) 0.08
    0.08245729 = product of:
      0.16491458 = sum of:
        0.13678056 = weight(_text_:digital in 468) [ClassicSimilarity], result of:
          0.13678056 = score(doc=468,freq=56.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.6918357 = fieldWeight in 468, product of:
              7.483315 = tf(freq=56.0), with freq of:
                56.0 = termFreq=56.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0234375 = fieldNorm(doc=468)
        0.028134026 = weight(_text_:library in 468) [ClassicSimilarity], result of:
          0.028134026 = score(doc=468,freq=12.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.21347894 = fieldWeight in 468, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0234375 = fieldNorm(doc=468)
      0.5 = coord(2/4)
    
    Abstract
    Archival Information Systems (AIS) are becoming increasingly important. For decades, the amount of content created digitally has been growing, and its complete life cycle nowadays tends to remain digital. A selection of this content is expected to be of value for the future and can thus be considered part of our cultural heritage. However, digital content poses many challenges for long-term or indefinite preservation; e.g., digital publications become increasingly complex through the embedding of different kinds of multimedia, data in arbitrary formats, and software. As soon as these digital publications become obsolete, but are still deemed to be of value in the future, they have to be transferred smoothly into appropriate AIS, where they need to be kept accessible even through changing technologies. The successful previous SDA workshop in 2011 showed that both the library and the archiving community have made valuable contributions to the management of huge amounts of knowledge and data. However, both are approaching this topic from different views, which shall be brought together to cross-fertilize each other. There are promising combinations of pertinence and provenance models, since those are traditionally the prevailing knowledge organization principles of the library and archiving community, respectively. Another scientific discipline providing promising technical solutions for knowledge representation and knowledge management is semantic technologies, supported by appropriate W3C recommendations and a large user community. At the forefront of making the semantic web a mature and applicable reality is the linked data initiative, which has already started to be adopted by the library community. It can be expected that using semantic (web) technologies in general, and linked data in particular, can mature the area of digital archiving as well as technologically tighten the natural bond between digital libraries and digital archives. Semantic representations of contextual knowledge about cultural heritage objects will enhance organization of and access to data and knowledge. In order to achieve a comprehensive investigation, the information seeking and document triage behaviors of users (an area also classified under the field of Human Computer Interaction) will also be included in the research.
    One of the major challenges of digital archiving is how to deal with changing technologies and changing user communities. On the one hand, software, hardware, and (multimedia) data formats that become obsolete and are no longer supported still need to be kept accessible. On the other hand, changing user communities necessitate technical means to formalize, detect, and measure knowledge evolution. Furthermore, digital archival records are usually not deleted from the AIS, and therefore the amount of digitally archived (multimedia) content can be expected to grow rapidly. Efficient storage management solutions are therefore required, geared to the fact that cultural heritage is not accessed as frequently as up-to-date content residing in a digital library. Software and hardware need to be tightly connected, based on sophisticated knowledge representation and management models, in order to face that challenge. In line with the above, contributions to the workshop should focus on, but are not limited to:
    • Semantic search & semantic information retrieval in digital archives and digital libraries
    • Semantic multimedia archives
    • Ontologies & linked data for digital archives and digital libraries
    • Ontologies & linked data for multimedia archives
    • Implementations and evaluations of semantic digital archives
    • Visualization and exploration of digital content
    • User interfaces for semantic digital libraries
    • User interfaces for intelligent multimedia information retrieval
    • User studies focusing on end-user needs and information seeking behavior of end-users
    • Theoretical and practical archiving frameworks using Semantic (Web) technologies
    • Logical theories for digital archives
    • Semantic (Web) services implementing the OAIS standard
    • Semantic or logical provenance models for digital archives or digital libraries
    • Information integration/semantic ingest (e.g. from digital libraries)
    • Trust for ingest and data security/integrity check for long-term storage of archival records
    • Semantic extensions of emulation/virtualization methodologies tailored for digital archives
    • Semantic long-term storage and hardware organization tailored for AIS
    • Migration strategies based on Semantic (Web) technologies
    • Knowledge evolution
    We expect new insights and results for sustainable technical solutions for digital archiving using knowledge management techniques based on semantic technologies. The workshop emphasizes interdisciplinarity and aims at an audience consisting of scientists and scholars from the digital library, digital archiving, multimedia technology and semantic web communities, the information and library sciences, as well as from the social sciences and (digital) humanities, in particular people working on the mentioned topics. We encourage end-users, practitioners and policy-makers from cultural heritage institutions to participate as well.
  14. Biagetti, M.T.: Digital libraries and semantic searching (2014) 0.08
    0.07931756 = product of:
      0.15863512 = sum of:
        0.13486744 = weight(_text_:digital in 1463) [ClassicSimilarity], result of:
          0.13486744 = score(doc=1463,freq=10.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.6821592 = fieldWeight in 1463, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1463)
        0.023767682 = product of:
          0.047535364 = sum of:
            0.047535364 = weight(_text_:22 in 1463) [ClassicSimilarity], result of:
              0.047535364 = score(doc=1463,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.2708308 = fieldWeight in 1463, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1463)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The aim of this paper is to highlight the possibility of improving the search functions of documents in Digital Libraries. The latest-generation Digital Libraries, and Semantic Digital Libraries, which adopt advanced Semantic Web and Social Networking tools such as bibliographical ontologies and recommendation systems, are taken into consideration. The proposal is to additionally consider the paradigm founded on the conceptual analysis of documents in order to improve semantic search functionalities in Digital Libraries.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  15. Copeland, A.J.: Analysis of public library users' digital preservation practices (2011) 0.08
    0.07926495 = product of:
      0.1585299 = sum of:
        0.120629124 = weight(_text_:digital in 4473) [ClassicSimilarity], result of:
          0.120629124 = score(doc=4473,freq=8.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.61014175 = fieldWeight in 4473, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4473)
        0.03790077 = weight(_text_:library in 4473) [ClassicSimilarity], result of:
          0.03790077 = score(doc=4473,freq=4.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.28758827 = fieldWeight in 4473, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4473)
      0.5 = coord(2/4)
    
    Abstract
    This research investigated preservation practices of personal digital information by public library users. This qualitative study used semistructured interviews and two visual representation techniques, information source horizons and matrices, for data collection. The constant comparison method and descriptive statistics were used to analyze the data. A model emerged which describes the effects of social, cognitive, and affective influences on personal preservation decisions as well as the effects of fading cognitive associations and technological advances, combined with information escalation over time. Because the preservation of personal digital information involves personal, social, and technological interactions, the integration of these factors is necessary for a viable solution to the digital preservation problem.
  16. Ullah, A.; Khusro, S.; Ullah, I.: Bibliographic classification in the digital age : current trends & future directions (2017) 0.08
    0.07926495 = product of:
      0.1585299 = sum of:
        0.120629124 = weight(_text_:digital in 5717) [ClassicSimilarity], result of:
          0.120629124 = score(doc=5717,freq=8.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.61014175 = fieldWeight in 5717, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5717)
        0.03790077 = weight(_text_:library in 5717) [ClassicSimilarity], result of:
          0.03790077 = score(doc=5717,freq=4.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.28758827 = fieldWeight in 5717, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5717)
      0.5 = coord(2/4)
    
    Abstract
    Bibliographic classification is among the core activities of Library & Information Science that brings order and proper management to the holdings of a library. Compared to printed media, digital collections present numerous challenges regarding their preservation, curation, organization and resource discovery & access. Therefore, a truly native perspective needs to be adopted for bibliographic classification in digital environments. In this research article, we have investigated and reported different approaches to bibliographic classification of digital collections. The article also contributes two evaluation frameworks that evaluate the existing classification schemes and systems. The article presents a bird's-eye view for researchers in reaching a generalized and holistic approach towards bibliographic classification research, where new research avenues have been identified.
  17. Mustafa El Hadi, W.; Favier, L.: Bridging the gaps between knowledge organization and digital humanities (2014) 0.08
    0.07857643 = product of:
      0.15715286 = sum of:
        0.13678056 = weight(_text_:digital in 1462) [ClassicSimilarity], result of:
          0.13678056 = score(doc=1462,freq=14.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.6918357 = fieldWeight in 1462, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=1462)
        0.0203723 = product of:
          0.0407446 = sum of:
            0.0407446 = weight(_text_:22 in 1462) [ClassicSimilarity], result of:
              0.0407446 = score(doc=1462,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.23214069 = fieldWeight in 1462, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1462)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The common core activity for digital humanities and memory institutions such as libraries, archives, and museums is digitizing the representations of cultural and historical documents, images, and artifacts. Most of these resources are delivered online to users. The emergence of Digital Libraries in the early 1990s was a turning point and a critical component of the world-wide shift to networked information. This article focuses on the fundamental role of Knowledge Organization Systems (KOS) for the Humanities with a special attention to libraries as one of the actors of Digital Humanities. The interplay between Digital Libraries and Digital Humanities will be highlighted. Not only will they provide access to a host of source materials that humanists need in order to do their work, but Digital Libraries will also enable new forms of research that were difficult or impossible to undertake before.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  18. Liu, Y.; Li, W.; Huang, Z.; Fang, Q.: ¬A fast method based on multiple clustering for name disambiguation in bibliographic citations (2015) 0.08
    0.078551635 = product of:
      0.10473551 = sum of:
        0.060926907 = weight(_text_:digital in 1672) [ClassicSimilarity], result of:
          0.060926907 = score(doc=1672,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.3081681 = fieldWeight in 1672, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1672)
        0.01914278 = weight(_text_:library in 1672) [ClassicSimilarity], result of:
          0.01914278 = score(doc=1672,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.14525402 = fieldWeight in 1672, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1672)
        0.024665821 = product of:
          0.049331643 = sum of:
            0.049331643 = weight(_text_:project in 1672) [ClassicSimilarity], result of:
              0.049331643 = score(doc=1672,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.23317845 = fieldWeight in 1672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1672)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Name ambiguity in the context of bibliographic citation affects the quality of services in digital libraries. Previous methods are not widely applied in practice because of their high computational complexity and their strong dependency on excessive attributes, such as institutional affiliation, research area, address, etc., which are difficult to obtain in practice. To solve this problem, we propose a novel coarse-to-fine framework for name disambiguation which sequentially employs 3 common and easily accessible attributes (i.e., coauthor name, article title, and publication venue). Our proposed framework is based on multiple clustering and consists of 3 steps: (a) clustering articles by coauthorship and obtaining rough clusters, that is, fragments; (b) clustering fragments obtained in step 1 by title information and getting bigger fragments; and (c) clustering fragments obtained in step 2 by the latent relations among venues. Experimental results on a Digital Bibliography and Library Project (DBLP) data set show that our method outperforms the existing state-of-the-art methods by 2.4% to 22.7% on the average pairwise F1 score and is 10 to 100 times faster in terms of execution time.
  19. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.08
    0.07761024 = product of:
      0.10348032 = sum of:
        0.051698197 = weight(_text_:digital in 1967) [ClassicSimilarity], result of:
          0.051698197 = score(doc=1967,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.26148933 = fieldWeight in 1967, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=1967)
        0.022971334 = weight(_text_:library in 1967) [ClassicSimilarity], result of:
          0.022971334 = score(doc=1967,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.17430481 = fieldWeight in 1967, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=1967)
        0.028810784 = product of:
          0.05762157 = sum of:
            0.05762157 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
              0.05762157 = score(doc=1967,freq=4.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.32829654 = fieldWeight in 1967, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1967)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and /or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
    Source
    Beyond libraries - subject metadata in the digital environment and semantic web. IFLA Satellite Post-Conference, 17-18 August 2012, Tallinn
  20. Gradmann, S.: Knowledge = Information in context : on the importance of semantic contextualisation in Europeana (2010) 0.08
    0.07718726 = product of:
      0.10291635 = sum of:
        0.059695937 = weight(_text_:digital in 3475) [ClassicSimilarity], result of:
          0.059695937 = score(doc=3475,freq=6.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.30194187 = fieldWeight in 3475, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.03125 = fieldNorm(doc=3475)
        0.015314223 = weight(_text_:library in 3475) [ClassicSimilarity], result of:
          0.015314223 = score(doc=3475,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.11620321 = fieldWeight in 3475, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03125 = fieldNorm(doc=3475)
        0.027906192 = product of:
          0.055812385 = sum of:
            0.055812385 = weight(_text_:project in 3475) [ClassicSimilarity], result of:
              0.055812385 = score(doc=3475,freq=4.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.26381132 = fieldWeight in 3475, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3475)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    "Europeana.eu is about ideas and inspiration. It links you to 6 million digital items." This is the opening statement taken from the Europeana WWW-site (http://www.europeana.eu/portal/aboutus.html), and it clearly is concerned with the mission of Europeana - without, however, being over-explicit as to the precise nature of that mission. Europeana's current logo, too, has a programmatic aspect: the slogan "Think Culture" clearly again is related to Europeana's mission and at same time seems somewhat closer to the point: 'thinking' culture evokes notions like conceptualisation, reasoning, semantics and the like. Still, all this remains fragmentary and insufficient to actually clarify the functional scope and mission of Europeana. In fact, the author of the present contribution is convinced that Europeana has too often been described in terms of sheer quantity, as a high volume aggregation of digital representations of cultural heritage objects without sufficiently stressing the functional aspects of this endeavour. This conviction motivates the present contribution on some of the essential functional aspects of Europeana making clear that such a contribution - even if its author is deeply involved in building Europeana - should not be read as an official statement of the project or of the European Commission (which it is not!) - but as the personal statement from an information science perspective! From this perspective the opening statement is that Europeana is much more than a machine for mechanical accumulation of object representations but that one of its main characteristics should be to enable the generation of knowledge pertaining to cultural artefacts. The rest of the paper is about the implications of this initial statement in terms of information science, on the way we technically prepare to implement the necessary data structures and functionality and on the novel functionality Europeana will offer based on these elements and which go well beyond the 'traditional' digital library paradigm. However, prior to exploring these areas it may be useful to recall the notion of 'knowledge' that forms the basis of this contribution and which in turn is part of the well known continuum reaching from data via information and knowledge to wisdom.
    Content
    See: http://version1.europeana.eu/web/europeana-project/whitepapers.
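
Note on the relevance scores: the breakdown shown under each entry is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring. For each matching query term, tf(freq) = sqrt(termFreq), queryWeight = idf x queryNorm, and fieldWeight = tf x idf x fieldNorm; the term's contribution is queryWeight x fieldWeight, and the contributions are summed and multiplied by a coord factor (matched clauses / total clauses). The sketch below is a minimal, purely illustrative Python reconstruction of entry 1's score (doc 1717) from the factors listed in its breakdown; the constant and helper names are ours, not Lucene API calls.

```python
from math import sqrt

# Minimal sketch of Lucene ClassicSimilarity (TF-IDF) scoring, using the
# numbers from the explanation tree of entry 1 (doc 1717).
QUERY_NORM = 0.050121464   # queryNorm, identical for every term of the query
FIELD_NORM = 0.0546875     # fieldNorm(doc=1717), field length normalization

def term_score(freq, idf):
    tf = sqrt(freq)                        # tf(freq) = sqrt(termFreq)
    query_weight = idf * QUERY_NORM        # idf * queryNorm
    field_weight = tf * idf * FIELD_NORM   # tf * idf * fieldNorm
    return query_weight * field_weight

digital = term_score(2.0, 3.944552)         # ~0.06031456
library = term_score(2.0, 2.6293786)        # ~0.02679989
project = term_score(2.0, 4.220981) * 0.5   # nested clause matched 1 of 2: coord(1/2)

# Three of the four top-level query clauses matched, hence coord(3/4) = 0.75.
score = (digital + library + project) * 0.75
print(score)  # ~0.09123495, the score displayed for entry 1
```

The same arithmetic explains every other breakdown on this page; only the term frequencies, idf values, fieldNorm and coord factors change from entry to entry.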

Languages

  • e 1667
  • d 315
  • f 2
  • i 2
  • a 1
  • hu 1

Types

  • a 1715
  • el 236
  • m 154
  • s 60
  • x 27
  • r 13
  • n 8
  • b 6
  • i 3
  • ag 1
  • z 1
