Search (934 results, page 1 of 47)

  • year_i:[2010 TO 2020}
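The active filter uses Lucene/Solr range syntax: `[` makes the lower bound inclusive and `}` makes the upper bound exclusive, so it matches 2010 <= year_i < 2020. A minimal sketch of how such a request might look against a Solr-style backend (the query terms and parameter set are assumptions for illustration; `debugQuery=true` is what makes Solr emit per-document score explanations like the ones shown in the results below):

```python
# Hypothetical Solr request parameters reproducing the active range filter.
# "year_i:[2010 TO 2020}" keeps 2010 (inclusive "[") and excludes 2020 (exclusive "}").
from urllib.parse import urlencode

params = {
    "q": "objects books",           # assumed free-text query terms
    "fq": "year_i:[2010 TO 2020}",  # the active range filter shown above
    "debugQuery": "true",           # asks Solr to include explain trees in the response
    "rows": 20,
}
query_string = urlencode(params)
print(query_string)
```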
  1. Dick, S.J.: Astronomy's Three Kingdom System : a comprehensive classification system of celestial objects (2019) 0.13
    0.12884901 = product of:
      0.32212254 = sum of:
        0.2646768 = weight(_text_:objects in 5455) [ClassicSimilarity], result of:
          0.2646768 = score(doc=5455,freq=8.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.82213306 = fieldWeight in 5455, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5455)
        0.05744573 = weight(_text_:22 in 5455) [ClassicSimilarity], result of:
          0.05744573 = score(doc=5455,freq=2.0), product of:
            0.2121093 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.060570993 = queryNorm
            0.2708308 = fieldWeight in 5455, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5455)
      0.4 = coord(2/5)
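Each numbered breakdown on this page is Lucene ClassicSimilarity explain output. Read bottom-up: tf(freq) = sqrt(termFreq); idf(docFreq) = 1 + ln(maxDocs / (docFreq + 1)); fieldWeight = tf × idf × fieldNorm; queryWeight = idf × queryNorm; each clause contributes queryWeight × fieldWeight, and the sum is scaled by coord(matched/total query terms). A short sketch, using only the numbers in the tree above, reproduces result no. 1's score:

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity idf; reproduces e.g. idf(docFreq=590, maxDocs=44218) = 5.315071
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def clause(freq, term_idf, field_norm, query_norm):
    # One weight(_text_:term) clause of the explain tree: queryWeight * fieldWeight
    tf = math.sqrt(freq)                       # tf(freq=8.0) -> 2.828427
    field_weight = tf * term_idf * field_norm  # 0.82213306 for the "objects" clause
    query_weight = term_idf * query_norm       # 0.32193914 for the "objects" clause
    return query_weight * field_weight

QUERY_NORM = 0.060570993  # queryNorm, shared by every clause on this page

w_objects = clause(8.0, idf(590, 44218), 0.0546875, QUERY_NORM)   # 0.2646768
w_22      = clause(2.0, idf(3622, 44218), 0.0546875, QUERY_NORM)  # 0.05744573
score = (w_objects + w_22) * (2 / 5)  # coord(2/5): 2 of 5 query terms matched
print(round(score, 8))                # ≈ the 0.12884901 shown above
```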
    
    Abstract
    Although classification has been an important aspect of astronomy since stellar spectroscopy in the late nineteenth century, to date no comprehensive classification system has existed for all classes of objects in the universe. Here we present such a system, and lay out its foundational definitions and principles. The system consists of the "Three Kingdoms" of planets, stars and galaxies, eighteen families, and eighty-two classes of objects. Gravitation is the defining organizing principle for the families and classes, and the physical nature of the objects is the defining characteristic of the classes. The system should prove useful for both scientific and pedagogical purposes.
    Date
    21.11.2019 18:46:22
  2. Viti, E.: My first ten years : nuovo soggettario growing, development and integration with other knowledge organization systems (2017) 0.13
    0.12823248 = product of:
      0.2137208 = sum of:
        0.09452744 = weight(_text_:objects in 4143) [ClassicSimilarity], result of:
          0.09452744 = score(doc=4143,freq=2.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.29361898 = fieldWeight in 4143, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4143)
        0.07816069 = weight(_text_:books in 4143) [ClassicSimilarity], result of:
          0.07816069 = score(doc=4143,freq=2.0), product of:
            0.29274467 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.060570993 = queryNorm
            0.2669927 = fieldWeight in 4143, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4143)
        0.04103267 = weight(_text_:22 in 4143) [ClassicSimilarity], result of:
          0.04103267 = score(doc=4143,freq=2.0), product of:
            0.2121093 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.060570993 = queryNorm
            0.19345059 = fieldWeight in 4143, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4143)
      0.6 = coord(3/5)
    
    Abstract
    The Nuovo Soggettario is a subject indexing system edited by the Biblioteca Nazionale Centrale di Firenze. It was presented to librarians from across Italy on 8 February 2007 in Florence as a new edition of the Soggettario (1956), and it has become the official Italian subject indexing tool. The system is made up of two individual and interactive components: the general thesaurus, accessible on the web since 2007, and the rules of a conventional syntax for the construction of subject strings. The Nuovo soggettario thesaurus has grown significantly in terms of terminology and connections with other knowledge organization tools (e.g., encyclopedias, dictionaries, resources of archives and museums, and other information data sets), offering users the possibility to browse through documents, books, objects, photographs, etc. The conversion of the Nuovo soggettario thesaurus into formats suitable for the semantic web and the linked data world improves its function as an interlinking hub for direct searching and for organizing content by different professional communities. Thanks to structured data and the SKOS format, the Nuovo soggettario thesaurus is published on the Data Hub platform, giving broad visibility to the BNCF and its precious patrimony.
    Content
    Contribution to a special issue: ISKO-Italy: 8° Incontro ISKO Italia, Università di Bologna, 22 maggio 2017, Bologna, Italia.
  3. Batorowska, H.; Kaminska-Czubala, B.: Information retrieval support : visualisation of the information space of a document (2014) 0.12
    0.12138016 = product of:
      0.20230025 = sum of:
        0.10694559 = weight(_text_:objects in 1444) [ClassicSimilarity], result of:
          0.10694559 = score(doc=1444,freq=4.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.33219194 = fieldWeight in 1444, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.03125 = fieldNorm(doc=1444)
        0.06252854 = weight(_text_:books in 1444) [ClassicSimilarity], result of:
          0.06252854 = score(doc=1444,freq=2.0), product of:
            0.29274467 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.060570993 = queryNorm
            0.21359414 = fieldWeight in 1444, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.03125 = fieldNorm(doc=1444)
        0.032826133 = weight(_text_:22 in 1444) [ClassicSimilarity], result of:
          0.032826133 = score(doc=1444,freq=2.0), product of:
            0.2121093 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.060570993 = queryNorm
            0.15476047 = fieldWeight in 1444, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=1444)
      0.6 = coord(3/5)
    
    Abstract
    Acquiring knowledge in any field involves information retrieval, i.e. searching the available documents to identify answers to queries concerning the selected objects. Knowing the keywords, which are names of the objects, enables situating the user's query in an information space organized as a thesaurus or faceted classification. Objectives: Identifying the areas in the information space which correspond to gaps in the user's personal knowledge or in the domain knowledge might prove useful in theory or practice. The aim of this paper is to present a realistic information-space model of a self-authored full-text document on information culture, indexed by the author of this article. Methodology: Having established the relations between the terms, particular modules (sets of terms connected by relations used in facet classification) are situated on a plane, similarly to a communication map. Conclusions drawn from the "journey" on the map, which is a visualization of the knowledge contained in the analysed document, are the crucial part of this paper. Results: The direct result of the research is the created model of information-space visualization of a given document (book, article, website). The proposed procedure can practically be used as a new form of representation to map the contents of academic books and articles, besides the traditional index form, especially as an e-book auxiliary tool. In teaching, visualization of the information space of a document can help students understand the issues of classification, categorization and representation of new knowledge emerging in the human mind.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  4. Hackett, P.M.W.: Facet theory and the mapping sentence : evolving philosophy, use and application (2014) 0.12
    0.11812608 = product of:
      0.1968768 = sum of:
        0.07562195 = weight(_text_:objects in 2258) [ClassicSimilarity], result of:
          0.07562195 = score(doc=2258,freq=2.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.23489517 = fieldWeight in 2258, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.03125 = fieldNorm(doc=2258)
        0.08842871 = weight(_text_:books in 2258) [ClassicSimilarity], result of:
          0.08842871 = score(doc=2258,freq=4.0), product of:
            0.29274467 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.060570993 = queryNorm
            0.30206773 = fieldWeight in 2258, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.03125 = fieldNorm(doc=2258)
        0.032826133 = weight(_text_:22 in 2258) [ClassicSimilarity], result of:
          0.032826133 = score(doc=2258,freq=2.0), product of:
            0.2121093 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.060570993 = queryNorm
            0.15476047 = fieldWeight in 2258, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=2258)
      0.6 = coord(3/5)
    
    Abstract
    This book brings together contemporary facet theory research to propose mapping sentences as a new way of understanding complex behavior, and suggests future directions the approach may take. How do we think about the worlds we live in? The formation of categories of events and objects seems to be a fundamental orientation procedure. Facet theory and its main tool, the mapping sentence, deal with categories of behavior and experience, their interrelationship, and their unification as our worldviews. In this book Hackett reviews philosophical writing along with neuroscientific research and information from other disciplines to provide a context for facet theory and the qualitative developments in this approach. With a variety of examples, the author proposes mapping sentences as a new way of understanding and defining complex behavior.
    Date
    17.10.2015 17:22:01
    LCSH
    Electronic books
    Subject
    Electronic books
  5. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.12
    0.11544335 = product of:
      0.57721674 = sum of:
        0.57721674 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
          0.57721674 = score(doc=973,freq=2.0), product of:
            0.51352155 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.060570993 = queryNorm
            1.1240361 = fieldWeight in 973, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.09375 = fieldNorm(doc=973)
      0.2 = coord(1/5)
    
    Content
    Cf.: http://creativechoice.org/doc/HansJonas.pdf.
  6. Holetschek, J. et al.: Natural history in Europeana : accessing scientific collection objects via LOD (2016) 0.11
    0.10844809 = product of:
      0.27112022 = sum of:
        0.18905488 = weight(_text_:objects in 3277) [ClassicSimilarity], result of:
          0.18905488 = score(doc=3277,freq=2.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.58723795 = fieldWeight in 3277, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.078125 = fieldNorm(doc=3277)
        0.08206534 = weight(_text_:22 in 3277) [ClassicSimilarity], result of:
          0.08206534 = score(doc=3277,freq=2.0), product of:
            0.2121093 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.060570993 = queryNorm
            0.38690117 = fieldWeight in 3277, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=3277)
      0.4 = coord(2/5)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  7. Sapon-White, R.: E-book cataloging workflows at Oregon State University (2014) 0.10
    0.10358652 = product of:
      0.2589663 = sum of:
        0.20972711 = weight(_text_:books in 2604) [ClassicSimilarity], result of:
          0.20972711 = score(doc=2604,freq=10.0), product of:
            0.29274467 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.060570993 = queryNorm
            0.7164165 = fieldWeight in 2604, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.046875 = fieldNorm(doc=2604)
        0.0492392 = weight(_text_:22 in 2604) [ClassicSimilarity], result of:
          0.0492392 = score(doc=2604,freq=2.0), product of:
            0.2121093 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.060570993 = queryNorm
            0.23214069 = fieldWeight in 2604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=2604)
      0.4 = coord(2/5)
    
    Abstract
    Among the many issues associated with integrating e-books into library collections and services, the revision of existing workflows in cataloging units has received little attention. The experience designing new workflows for e-books at Oregon State University Libraries since 2008 is described in detail from the perspective of three different sources of e-books. These descriptions highlight where the workflows applied to each vendor's stream differ. A workflow was developed for each vendor, based on the quality and source of available bibliographic records and the staff member performing the task. Involving cataloging staff as early as possible in the process of purchasing e-books from a new vendor ensures that a suitable workflow can be designed and implemented promptly, so that the representation of e-books in the library catalog is not delayed, increasing the likelihood that users will readily find and use these resources that the library has purchased.
    Date
    10. 9.2000 17:38:22
  8. Benoit, G.; Hussey, L.: Repurposing digital objects : case studies across the publishing industry (2011) 0.10
    0.09784021 = product of:
      0.24460052 = sum of:
        0.18715478 = weight(_text_:objects in 4198) [ClassicSimilarity], result of:
          0.18715478 = score(doc=4198,freq=4.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.5813359 = fieldWeight in 4198, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4198)
        0.05744573 = weight(_text_:22 in 4198) [ClassicSimilarity], result of:
          0.05744573 = score(doc=4198,freq=2.0), product of:
            0.2121093 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.060570993 = queryNorm
            0.2708308 = fieldWeight in 4198, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4198)
      0.4 = coord(2/5)
    
    Abstract
    Large, data-rich organizations have tremendously large collections of digital objects to be "repurposed," to respond quickly and economically to publishing, marketing, and information needs. Management typically assumes that a content management system, or some other technique such as OWL and RDF, will automatically address the workflow and technical issues associated with this reuse. Four case studies show that the sources of some roadblocks to agile repurposing are as much managerial and organizational as they are technical in nature. The review concludes with suggestions on how digital object repurposing can be integrated given these organizations' structures.
    Date
    22. 1.2011 14:23:07
  9. Welsh, A.: ¬The rare books catalog and the scholarly database (2016) 0.10
    0.09670534 = product of:
      0.24176335 = sum of:
        0.1323384 = weight(_text_:objects in 5128) [ClassicSimilarity], result of:
          0.1323384 = score(doc=5128,freq=2.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.41106653 = fieldWeight in 5128, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5128)
        0.10942495 = weight(_text_:books in 5128) [ClassicSimilarity], result of:
          0.10942495 = score(doc=5128,freq=2.0), product of:
            0.29274467 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.060570993 = queryNorm
            0.37378973 = fieldWeight in 5128, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5128)
      0.4 = coord(2/5)
    
    Abstract
    The article is a researcher's eye view of the value of the library catalog not only as a database to be searched for surrogates of objects of study, but as a corpus of text that can be analyzed in its own right, or incorporated within the researcher's own research database. Barriers are identified in the ways in which catalog data can be output and the technical skills researchers currently need to download, ingest, and manipulate data. Research tools and datasets created by, or in collaboration with, the library community are identified.
  10. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.10
    0.0962028 = product of:
      0.48101398 = sum of:
        0.48101398 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
          0.48101398 = score(doc=1826,freq=2.0), product of:
            0.51352155 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.060570993 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.2 = coord(1/5)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  11. Torres-Salinas, D.; Gorraiz, J.; Robinson-Garcia, N.: ¬The insoluble problems of books : what does Altmetric.com have to offer? (2018) 0.10
    0.09608394 = product of:
      0.24020985 = sum of:
        0.20738372 = weight(_text_:books in 4633) [ClassicSimilarity], result of:
          0.20738372 = score(doc=4633,freq=22.0), product of:
            0.29274467 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.060570993 = queryNorm
            0.70841163 = fieldWeight in 4633, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.03125 = fieldNorm(doc=4633)
        0.032826133 = weight(_text_:22 in 4633) [ClassicSimilarity], result of:
          0.032826133 = score(doc=4633,freq=2.0), product of:
            0.2121093 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.060570993 = queryNorm
            0.15476047 = fieldWeight in 4633, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=4633)
      0.4 = coord(2/5)
    
    Abstract
    Purpose The purpose of this paper is to analyze the capabilities, functionalities and appropriateness of Altmetric.com as a data source for the bibliometric analysis of books, in comparison to PlumX. Design/methodology/approach The authors perform an exploratory analysis of the metrics the Altmetric Explorer for Institutions platform offers for books. The authors use two distinct data sets of books. On the one hand, the authors analyze the Book Collection included in Altmetric.com. On the other hand, the authors use Clarivate's Master Book List to analyze Altmetric.com's capabilities to download and merge data with external databases. Finally, the authors compare the findings with those obtained in a previous study performed in PlumX. Findings Altmetric.com combines and tracks an orderly set of data sources, linked by DOI identifiers, to retrieve metadata from books, with Google Books as its main provider. It also retrieves information from commercial publishers and from some Open Access initiatives, including those led by university libraries, such as Harvard Library. The authors find issues with linkages between records and mentions, as well as ISBN discrepancies. Furthermore, the authors find that automated bots greatly affect Wikipedia mentions of books. The comparison with PlumX suggests that neither of these tools provides a complete picture of the social attention generated by books; they are complementary rather than comparable tools. Practical implications This study targets different audiences that can benefit from the findings. First, bibliometricians and researchers who seek alternative sources to develop bibliometric analyses of books, with a special focus on the Social Sciences and Humanities fields. Second, librarians and research managers, who are the main clients to whom these tools are directed. Third, Altmetric.com itself, as well as other altmetric providers who might get a better understanding of the limitations users encounter and improve this promising tool. Originality/value This is the first study to analyze Altmetric.com's functionalities and capabilities for providing metric data for books and to compare results from this platform with those obtained via PlumX.
    Content
    Part of a special issue: Scholarly books and their evaluation context in the social sciences and humanities. Cf.: https://doi.org/10.1108/AJIM-06-2018-0152.
    Date
    20. 1.2015 18:30:22
  12. Homan, P.A.: Library catalog notes for "bad books" : ethics vs. responsibilities (2012) 0.09
    0.092994586 = product of:
      0.23248646 = sum of:
        0.19145378 = weight(_text_:books in 420) [ClassicSimilarity], result of:
          0.19145378 = score(doc=420,freq=12.0), product of:
            0.29274467 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.060570993 = queryNorm
            0.6539958 = fieldWeight in 420, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0390625 = fieldNorm(doc=420)
        0.04103267 = weight(_text_:22 in 420) [ClassicSimilarity], result of:
          0.04103267 = score(doc=420,freq=2.0), product of:
            0.2121093 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.060570993 = queryNorm
            0.19345059 = fieldWeight in 420, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=420)
      0.4 = coord(2/5)
    
    Abstract
    The conflict between librarians' ethics and their responsibility, under progressive collection management (which applies the principles of cost accounting to libraries), to call attention to the "bad books" in their collections that are compromised by age, error, abridgement, expurgation, plagiarism, copyright violation, libel, or fraud is discussed. According to Charles Cutter, notes in catalog records should call attention to the best books but ignore the bad ones. Libraries that can afford to keep their "bad books," however, which often have a valuable second life, must call attention to their intellectual contexts in notes in the catalog records. Michael Bellesiles's Arming America, the most famous case of academic fraud at the turn of the twenty-first century, is used as a test case. Given the bias of content enhancement that automatically pulls content from the Web into library catalogs, catalog notes for "bad books" may be the only way for librarians to uphold their ethical principles regarding collection management while fulfilling their professional responsibilities to their users in calling attention to their "bad books."
    Date
    27. 9.2012 14:22:00
  13. Katona, J.M.: ¬The cultural and historical contexts of ornamental prints published in the nineteenth and twentieth centuries in Europe : a case study for the standardized description of museum objects (2017) 0.08
    0.08473707 = product of:
      0.21184267 = sum of:
        0.13368198 = weight(_text_:objects in 4135) [ClassicSimilarity], result of:
          0.13368198 = score(doc=4135,freq=4.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.41523993 = fieldWeight in 4135, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4135)
        0.07816069 = weight(_text_:books in 4135) [ClassicSimilarity], result of:
          0.07816069 = score(doc=4135,freq=2.0), product of:
            0.29274467 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.060570993 = queryNorm
            0.2669927 = fieldWeight in 4135, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4135)
      0.4 = coord(2/5)
    
    Abstract
    The study focuses on ornamental prints (as components of pattern books) published and circulated all over Europe during the second half of the nineteenth century and in the first decades of the twentieth century. This special object type forms a particular segment not only in the history of ornamental prints and decorative arts in general but also in the history of architecture, applied arts, art education and archaeology. Enriched descriptions of these prints therefore have the potential to be of great benefit to scientific research in all the disciplines mentioned. The primary aim of this study is to survey and elaborate the standardized description of ornamental prints, considering them as visual works and describing them as museum objects. The paper attempts to answer questions posed by this multi-layered approach to scientific research, namely how to record ornamental prints, which belong to a special object type consisting of mixed visual and textual contents, and how to group the information in order to obtain the richest possible sets of data. The conceptual model of the standardized description will be elucidated with numerous examples, embedded in the broader art historical context.
  14. Ridenour, L.: Boundary objects : measuring gaps and overlap between research areas (2016) 0.08
    0.083863035 = product of:
      0.20965758 = sum of:
        0.16041838 = weight(_text_:objects in 2835) [ClassicSimilarity], result of:
          0.16041838 = score(doc=2835,freq=4.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.49828792 = fieldWeight in 2835, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=2835)
        0.0492392 = weight(_text_:22 in 2835) [ClassicSimilarity], result of:
          0.0492392 = score(doc=2835,freq=2.0), product of:
            0.2121093 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.060570993 = queryNorm
            0.23214069 = fieldWeight in 2835, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=2835)
      0.4 = coord(2/5)
    
    Abstract
    The aim of this paper is to develop a methodology to determine conceptual overlap between research areas. It investigates patterns of terminology usage in scientific abstracts as boundary objects between research specialties. Research specialties were determined by high-level classifications assigned by Thomson Reuters in their Essential Science Indicators file, which provided a strictly hierarchical classification of journals into 22 categories. Results from the query "network theory" were downloaded from the Web of Science. From this file, two top-level groups, economics and social sciences, were selected and topically analyzed to provide a baseline of similarity on which to run an informetric analysis. The Places & Spaces Map of Science (Klavans and Boyack 2007) was used to determine the proximity of disciplines to one another in order to select the two disciplines used in the analysis. The groups analyzed share common theories and goals; however, they used different language to describe their research. It was found that 61% of term words were shared between the two groups.
  15. Maxwell, R.L.: Handbook for RDA : Maxwell's handbook for RDA ; explaining and illustrating RDA: resource description and access using MARC 21 (2013) 0.08
    0.082890294 = product of:
      0.20722574 = sum of:
        0.11343292 = weight(_text_:objects in 2085) [ClassicSimilarity], result of:
          0.11343292 = score(doc=2085,freq=2.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.35234275 = fieldWeight in 2085, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=2085)
        0.09379282 = weight(_text_:books in 2085) [ClassicSimilarity], result of:
          0.09379282 = score(doc=2085,freq=2.0), product of:
            0.29274467 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.060570993 = queryNorm
            0.3203912 = fieldWeight in 2085, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.046875 = fieldNorm(doc=2085)
      0.4 = coord(2/5)
    
    Content
    Introduction -- Describing manifestations and items -- Describing persons -- Describing families -- Describing corporate bodies -- Describing geographic entities -- Describing works -- Describing expressions -- Recording relationships -- Appendix A. Printed books and sheets -- Appendix B. Cartographic resources -- Appendix C. Unpublished manuscripts and manuscript collections -- Appendix D. Notated music -- Appendix E. Audio recordings -- Appendix F. Moving image resources -- Appendix G. Two-dimensional graphic resources -- Appendix H. Three-dimensional resources and objects -- Appendix I. Digital resources -- Appendix J. Microform resources -- Appendix K. Bibliographic records: serials and integrating resources -- Appendix L. Analytical description.
  16. Ortega, C.D.: Conceptual and procedural grounding of documentary systems (2012) 0.08
    0.0819036 = product of:
      0.204759 = sum of:
        0.16372633 = weight(_text_:objects in 143) [ClassicSimilarity], result of:
          0.16372633 = score(doc=143,freq=6.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.508563 = fieldWeight in 143, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=143)
        0.04103267 = weight(_text_:22 in 143) [ClassicSimilarity], result of:
          0.04103267 = score(doc=143,freq=2.0), product of:
            0.2121093 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.060570993 = queryNorm
            0.19345059 = fieldWeight in 143, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=143)
      0.4 = coord(2/5)
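    The top lines of each tree show how per-clause scores combine: the sum of the matching clause scores is multiplied by a coordination factor coord(overlap, maxOverlap). A small sketch using the numbers copied from entry 16's tree, purely for illustration:

    ```python
    # Document score = coord(matching clauses, total clauses) * sum(clause scores).
    # Clause values below are copied from entry 16's explain tree.
    def coord(overlap: int, max_overlap: int) -> float:
        return overlap / max_overlap

    clause_scores = [0.16372633, 0.04103267]  # weight(_text_:objects), weight(_text_:22)
    doc_score = coord(2, 5) * sum(clause_scores)
    print(doc_score)  # ≈ 0.0819036
    ```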
    
    Abstract
    Documentary activities are informational operations of selection and representation of objects, made on the basis of their features and predictable use. In order to make them more dynamic, these activities are carried out systemically, according to institutionally limited (in the sense of social institution) information projects. This organic approach leads to the constitution of information systems, or, more specifically, systems of documentary information, inasmuch as they refer to actions on documents as objects from which information is produced. Thus, systems of documentary information are called documentary systems. This article aims to list and systematize elements with the potential for a generalizing and categorical approach to documentary systems. We approach the systems according to: elements of reference (the documents and their information, the users, and the institutional context); constitutive elements (collection and references); structural elements (constituent units and the relations among them); modes of production (pre- or post-representation of the document); management aspects (flow of documents and of their information); and, finally, typology (management systems and information retrieval systems). Thus, documentary systems can be considered products of operations involving institutionally limited objects for the production of collections (virtual or not) and their references, whose objective is the appropriation of information by the user.
    Content
    Beitrag einer Section "Selected Papers from the 1ST Brazilian Conference on Knowledge Organization And Representation, Faculdade de Ciência da Informação, Campus Universitário Darcy Ribeiro Brasília, DF Brasil, October 20-22, 2011" Vgl.: http://www.ergon-verlag.de/isko_ko/downloads/ko_39_2012_3_h.pdf.
  17. Somers, J.: Torching the modern-day library of Alexandria : somewhere at Google there is a database containing 25 million books and nobody is allowed to read them. (2017) 0.08
    0.07930445 = product of:
      0.19826111 = sum of:
        0.16543499 = weight(_text_:books in 3608) [ClassicSimilarity], result of:
          0.16543499 = score(doc=3608,freq=14.0), product of:
            0.29274467 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.060570993 = queryNorm
            0.565117 = fieldWeight in 3608, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.03125 = fieldNorm(doc=3608)
        0.032826133 = weight(_text_:22 in 3608) [ClassicSimilarity], result of:
          0.032826133 = score(doc=3608,freq=2.0), product of:
            0.2121093 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.060570993 = queryNorm
            0.15476047 = fieldWeight in 3608, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=3608)
      0.4 = coord(2/5)
    
    Abstract
    You were going to get one-click access to the full text of nearly every book that's ever been published. Books still in print you'd have to pay for, but everything else-a collection slated to grow larger than the holdings at the Library of Congress, Harvard, the University of Michigan, at any of the great national libraries of Europe-would have been available for free at terminals that were going to be placed in every local library that wanted one. At the terminal you were going to be able to search tens of millions of books and read every page of any book you found. You'd be able to highlight passages and make annotations and share them; for the first time, you'd be able to pinpoint an idea somewhere inside the vastness of the printed record, and send somebody straight to it with a link. Books would become as instantly available, searchable, copy-pasteable-as alive in the digital world-as web pages. It was to be the realization of a long-held dream. "The universal library has been talked about for millennia," Richard Ovenden, the head of Oxford's Bodleian Libraries, has said. "It was possible to think in the Renaissance that you might be able to amass the whole of published knowledge in a single room or a single institution." In the spring of 2011, it seemed we'd amassed it in a terminal small enough to fit on a desk. "This is a watershed event and can serve as a catalyst for the reinvention of education, research, and intellectual life," one eager observer wrote at the time. On March 22 of that year, however, the legal agreement that would have unlocked a century's worth of books and peppered the country with access terminals to a universal library was rejected under Rule 23(e)(2) of the Federal Rules of Civil Procedure by the U.S. District Court for the Southern District of New York. When the library at Alexandria burned it was said to be an "international catastrophe." When the most significant humanities project of our time was dismantled in court, the scholars, archivists, and librarians who'd had a hand in its undoing breathed a sigh of relief, for they believed, at the time, that they had narrowly averted disaster.
    Object
    Google books
    Source
    https://www.theatlantic.com/technology/archive/2017/04/the-tragedy-of-google-books/523320/
  18. Perugini, S.: Supporting multiple paths to objects in information hierarchies : faceted classification, faceted search, and symbolic links (2010) 0.08
    0.07591366 = product of:
      0.18978414 = sum of:
        0.1323384 = weight(_text_:objects in 4227) [ClassicSimilarity], result of:
          0.1323384 = score(doc=4227,freq=2.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.41106653 = fieldWeight in 4227, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4227)
        0.05744573 = weight(_text_:22 in 4227) [ClassicSimilarity], result of:
          0.05744573 = score(doc=4227,freq=2.0), product of:
            0.2121093 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.060570993 = queryNorm
            0.2708308 = fieldWeight in 4227, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4227)
      0.4 = coord(2/5)
    
    Source
    Information processing and management. 46(2010) no.1, S.22-43
  19. Madalli, D.P.; Balaji, B.P.; Sarangi, A.K.: Music domain analysis for building faceted ontological representation (2014) 0.08
    0.07591366 = product of:
      0.18978414 = sum of:
        0.1323384 = weight(_text_:objects in 1437) [ClassicSimilarity], result of:
          0.1323384 = score(doc=1437,freq=2.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.41106653 = fieldWeight in 1437, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1437)
        0.05744573 = weight(_text_:22 in 1437) [ClassicSimilarity], result of:
          0.05744573 = score(doc=1437,freq=2.0), product of:
            0.2121093 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.060570993 = queryNorm
            0.2708308 = fieldWeight in 1437, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1437)
      0.4 = coord(2/5)
    
    Abstract
    This paper describes how to construct faceted ontologies for domain modeling. Building upon the faceted theory of S.R. Ranganathan (1967), it addresses the faceted classification approach as applied to building domain ontologies. As classificatory ontologies are employed to represent the relationships of entities and objects on the web, the faceted approach helps to analyze domain representation effectively for modeling. Based on this perspective, an ontology of the music domain has been analyzed to serve as a case study.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  20. Dufour, C.; Bartlett, J.C.; Toms, E.G.: Understanding how webcasts are used as sources of information (2011) 0.07
    0.069885865 = product of:
      0.17471465 = sum of:
        0.13368198 = weight(_text_:objects in 4195) [ClassicSimilarity], result of:
          0.13368198 = score(doc=4195,freq=4.0), product of:
            0.32193914 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.060570993 = queryNorm
            0.41523993 = fieldWeight in 4195, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4195)
        0.04103267 = weight(_text_:22 in 4195) [ClassicSimilarity], result of:
          0.04103267 = score(doc=4195,freq=2.0), product of:
            0.2121093 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.060570993 = queryNorm
            0.19345059 = fieldWeight in 4195, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4195)
      0.4 = coord(2/5)
    
    Abstract
    Webcasting systems were developed to provide remote access in real-time to live events. Today, these systems have an additional requirement: to accommodate the "second life" of webcasts as archival information objects. Research to date has focused on facilitating the production and storage of webcasts as well as the development of more interactive and collaborative multimedia tools to support the event, but research has not examined how people interact with a webcasting system to access and use the contents of those archived events. Using an experimental design, this study examined how 16 typical users interact with a webcasting system to respond to a set of information tasks: selecting a webcast, searching for specific information, and making a gist of a webcast. Using several data sources that included user actions, user perceptions, and user explanations of their actions and decisions, the study also examined the strategies employed to complete the tasks. The results revealed distinctive system-use patterns for each task and provided insights into the types of tools needed to make webcasting systems better suited for also using the webcasts as information objects.
    Date
    22. 1.2011 14:16:14

Languages

  • e 719
  • d 201
  • a 1
  • f 1
  • hu 1
  • m 1
  • pt 1

Types

  • a 794
  • el 89
  • m 88
  • s 24
  • x 14
  • r 9
  • b 5
  • i 1
  • z 1
