Search (603 results, page 2 of 31)

  • Filter: type_ss:"el"
  1. Danskin, A.: Linked and open data : RDA and bibliographic control (2012) 0.04
    0.037554603 = product of:
      0.09388651 = sum of:
        0.06402116 = weight(_text_:bibliographic in 304) [ClassicSimilarity], result of:
          0.06402116 = score(doc=304,freq=4.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.3649729 = fieldWeight in 304, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=304)
        0.029865343 = product of:
          0.059730686 = sum of:
            0.059730686 = weight(_text_:data in 304) [ClassicSimilarity], result of:
              0.059730686 = score(doc=304,freq=8.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.4192326 = fieldWeight in 304, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=304)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
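    The breakdown above is standard Lucene ClassicSimilarity (TF-IDF) "explain" output, and the reported score can be reproduced by hand. A minimal sketch, with the factor values and coord fractions taken directly from the tree for doc 304:

    ```python
    import math

    # Factors copied from the Lucene "explain" tree for doc 304 above.
    QUERY_NORM = 0.04505818

    def term_weight(freq: float, idf: float, field_norm: float) -> float:
        """queryWeight * fieldWeight for one query term (ClassicSimilarity)."""
        query_weight = idf * QUERY_NORM                    # idf * queryNorm
        field_weight = math.sqrt(freq) * idf * field_norm  # tf(freq) * idf * fieldNorm
        return query_weight * field_weight

    bibliographic = term_weight(freq=4.0, idf=3.893044, field_norm=0.046875)
    # The "data" clause is nested under its own coord(1/2).
    data_term = term_weight(freq=8.0, idf=3.1620505, field_norm=0.046875) * 0.5

    # coord(2/5): only 2 of the 5 query clauses matched this document.
    score = (bibliographic + data_term) * 0.4
    # score ≈ 0.0375546, matching the 0.037554603 reported for result 1.
    ```

    Note that Lucene computes these factors in 32-bit floats, so a float64 re-computation agrees with the printed tree only to about six significant digits.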
    
    Abstract
    RDA: Resource Description and Access is a new cataloguing standard that will replace the Anglo-American Cataloguing Rules, 2nd edition (AACR2), which has been widely used in libraries since 1981. Like AACR2, RDA is a content standard providing guidance and instruction on how to identify and record the attributes or properties of resources that are significant for discovery. RDA is, however, also an implementation of the FRBR and FRAD models. The RDA element set and vocabularies are being published on the Open Metadata Registry as linked open data. RDA provides a rich vocabulary for the description of resources and for expressing relationships between them. This paper describes what RDA offers and considers the challenges and potential of linked open data in the broader framework of bibliographic control.
    Content
    Text of presentations held at the international seminar "Global Interoperability and Linked Data in Libraries", Firenze, June 18-19, 2012.
  2. Leresche, F.; Boulet, V.: RDA as a tool for the bibliographic transition : the French position (2016) 0.04
    Abstract
    This article presents the approach adopted in France to bring library catalogues to the Web of data, and the role of RDA in this overall strategy. After analysing the limits and inconsistencies of RDA, inherited from the tradition of AACR and MARC 21 catalogues, the authors present the French approach to RDA and its positioning in relation to international standards such as ISBD and FRBR. The method adopted in France for FRBRising the catalogues involves the technical work of creating alignments between existing data, exploiting the technologies applied in the creation of data.bnf.fr, and a revision of the French cataloguing rules that allows the creation of FRBRised metadata. This revision is based on RDA and is producing a French RDA application profile, with analysis focused on the points of greatest difference. RDA adoption, in fact, is not a crucial issue in France, nor an end in itself; it is simply a tool for the transition of bibliographic data towards the Web of data.
  3. Hjoerland, B.: Arguments for 'the bibliographical paradigm' : some thoughts inspired by the new English edition of the UDC (2007) 0.04
    Abstract
    The term 'the bibliographic paradigm' is used in the literature of library and information science, but it appears seldom and is almost always described negatively. This paper reconsiders the concept. Method. The method is mainly 'analytical'. Empirical data concerning the current state of the UDC classification system are also presented in order to illuminate the connection between theory and practice. Analysis. The bibliographic paradigm is understood as a perspective in library and information science that focuses on documents and information resources: their description, organization, mediation and use. This perspective is examined as one among other metatheories of library and information science, and its philosophical assumptions and implications are outlined. Results. The neglect and misunderstanding of 'the bibliographic paradigm', as well as the quality of the new UDC classification, indicate that both the metatheoretical discourses on library and information science and its concrete practice seem to be in a state of crisis.
  4. Report on the future of bibliographic control : draft for public comment (2007) 0.04
    Abstract
    The future of bibliographic control will be collaborative, decentralized, international in scope, and Web-based. Its realization will occur in cooperation with the private sector, and with the active collaboration of library users. Data will be gathered from multiple sources; change will happen quickly; and bibliographic control will be dynamic, not static. The underlying technology that makes this future possible and necessary, the World Wide Web, is now almost two decades old. Libraries must continue the transition to this future without delay in order to retain their relevance as information providers. The Working Group on the Future of Bibliographic Control encourages the library community to take a thoughtful and coordinated approach to effecting significant changes in bibliographic control. Such an approach will call for leadership that is neither unitary nor centralized. Nor will the responsibility to provide such leadership fall solely to the Library of Congress (LC). That said, the Working Group recognizes that LC plays a unique role in the library community of the United States, and the directions that LC takes have great impact on all libraries. We also recognize that there are many other institutions and organizations that have the expertise and the capacity to play significant roles in the bibliographic future. Wherever possible, those institutions must step forward and take responsibility for assisting with navigating the transition and for playing appropriate ongoing roles after that transition is complete. To achieve the goals set out in this document, we must look beyond individual libraries to a system-wide deployment of resources. We must realize efficiencies in order to reallocate resources from certain lower-value components of the bibliographic control ecosystem into other, higher-value components of that same ecosystem.
    The recommendations in this report are directed at a number of parties, indicated either by their common initialism (e.g., "LC" for Library of Congress, "PCC" for Program for Cooperative Cataloging) or by their general category (e.g., "Publishers," "National Libraries"). When a recommendation is addressed to "All," it is intended for the library community as a whole and its close collaborators.
    The Library of Congress must begin by prioritizing the recommendations that are directed in whole or in part at LC. Some define tasks that can be achieved immediately and with moderate effort; others will require analysis and planning that will have to be coordinated broadly and carefully. The Working Group has consciously not associated time frames with any of its recommendations. The recommendations fall into five general areas:
    1. Increase the efficiency of bibliographic production for all libraries through increased cooperation and increased sharing of bibliographic records, and by maximizing the use of data produced throughout the entire "supply chain" for information resources.
    2. Transfer effort into higher-value activity. In particular, expand the possibilities for knowledge creation by "exposing" rare and unique materials held by libraries that are currently hidden from view and, thus, underused.
    3. Position our technology for the future by recognizing that the World Wide Web is both our technology platform and the appropriate platform for the delivery of our standards. Recognize that people are not the only users of the data we produce in the name of bibliographic control; machine applications also interact with those data in a variety of ways.
    4. Position our community for the future by facilitating the incorporation of evaluative and other user-supplied information into our resource descriptions. Work to realize the potential of the FRBR framework for revealing and capitalizing on the various relationships that exist among information resources.
    5. Strengthen the library profession through education and the development of metrics that will inform decision-making now and in the future.
    The Working Group intends what follows to serve as a broad blueprint for the Library of Congress and its colleagues in the library and information technology communities for extending and promoting access to information resources.
    Editor
    Library of Congress / Working Group on the Future of Bibliographic Control
    Source
    http://www.loc.gov/bibliographic-future/news/lcwg-report-draft-11-30-07-final.pdf
  5. Godby, C.J.; Young, J.A.; Childress, E.: ¬A repository of metadata crosswalks (2004) 0.04
    Abstract
    This paper proposes a model for metadata crosswalks that associates three pieces of information: the crosswalk, the source metadata standard, and the target metadata standard, each of which may have a machine-readable encoding and human-readable description. The crosswalks are encoded as METS records that are made available to a repository for processing by search engines, OAI harvesters, and custom-designed Web services. The METS object brings together all of the information required to access and interpret crosswalks and represents a significant improvement over previously available formats. But it raises questions about how best to describe these complex objects and exposes gaps that must eventually be filled in by the digital library community.
  6. Radhakrishnan, A.: Swoogle : an engine for the Semantic Web (2007) 0.04
    Content
    "Swoogle, the Semantic web search engine, is a research project carried out by the ebiquity research group in the Computer Science and Electrical Engineering Department at the University of Maryland. It's an engine tailored towards finding documents on the semantic web. The whole research paper is available here. Semantic web is touted as the next generation of online content representation where the web documents are represented in a language that is not only easy for humans but is machine readable (easing the integration of data as never thought possible) as well. And the main elements of the semantic web include data model description formats such as Resource Description Framework (RDF), a variety of data interchange formats (e.g. RDF/XML, Turtle, N-Triples), and notations such as RDF Schema (RDFS), the Web Ontology Language (OWL), all of which are intended to provide a formal description of concepts, terms, and relationships within a given knowledge domain (Wikipedia). And Swoogle is an attempt to mine and index this new set of web documents. The engine performs crawling of semantic documents like most web search engines and the search is available as web service too. The engine is primarily written in Java with the PHP used for the front-end and MySQL for database. Swoogle is capable of searching over 10,000 ontologies and indexes more that 1.3 million web documents. It also computes the importance of a Semantic Web document. The techniques used for indexing are the more google-type page ranking and also mining the documents for inter-relationships that are the basis for the semantic web. For more information on how the RDF framework can be used to relate documents, read the link here. Being a research project, and with a non-commercial motive, there is not much hype around Swoogle. However, the approach to indexing of Semantic web documents is an approach that most engines will have to take at some point of time. 
When the Internet debuted, there were no specific engines available for indexing or searching. The Search domain only picked up as more and more content became available. One fundamental question that I've always wondered about it is - provided that the search engines return very relevant results for a query - how to ascertain that the documents are indeed the most relevant ones available. There is always an inherent delay in indexing of document. Its here that the new semantic documents search engines can close delay. Experimenting with the concept of Search in the semantic web can only bore well for the future of search technology."
  7. Takhirov, N.; Aalberg, T.; Duchateau, F.; Zumer, M.: FRBR-ML: a FRBR-based framework for semantic interoperability (2012) 0.04
    Abstract
    Metadata related to cultural items such as literature, music and movies is a valuable resource that is currently exploited in many applications and services based on Semantic Web technologies. A vast amount of such information has been created by memory institutions over the last decades using different standard or ad hoc schemas, and a main challenge is to make this legacy data accessible as reusable semantic data. On the one hand, this is a syntactic problem that can be solved by transforming to formats compatible with the tools and services used for semantics-aware services. On the other hand, it is a semantic problem. Simply transforming from one format to another does not automatically enable semantic interoperability, and legacy data often needs to be reinterpreted as well as transformed. The conceptual model in the Functional Requirements for Bibliographic Records (FRBR), initially developed as a conceptual framework for library standards and systems, is a major step towards a shared semantic model of the products of the artistic and intellectual endeavor of mankind. The model is generally accepted as sufficiently generic to serve as a conceptual framework for a broad range of cultural heritage metadata. Unfortunately, the existing large body of legacy data makes a transition to this model difficult. For instance, most bibliographic data is still only available in various MARC-based formats, which are hard to render into reusable and meaningful semantic data. Making legacy bibliographic data accessible as semantic data is a complex problem that includes interpreting and transforming the information. In this article, we present our work on transforming and enhancing legacy bibliographic information into a representation where the structure and semantics of the FRBR model are explicit.
  8. Buttò, S.: RDA: analyses, considerations and activities by the Central Institute for the Union Catalogue of Italian Libraries and Bibliographic Information (ICCU) (2016) 0.04
    Abstract
    The report aims to analyze the applicability of Resource Description and Access (RDA) in Italian public libraries, and also in archives and museums, in order to contribute to the discussion at the international level. The Central Institute for the Union Catalogue of Italian Libraries (ICCU) manages the online catalogue of Italian libraries and the network of bibliographic services, and has the institutional task of coordinating cataloguing and documentation activities for Italian libraries. On March 31st, 2014, the Institute signed an agreement with ALA Publishing (American Library Association) for the Italian translation rights to RDA, now available and published in the RDA Toolkit. The Italian translation was carried out by a Technical Working Group made up of the main national and academic libraries, cultural institutions and bibliographic agencies. The Group's starting point was the need to study the new code in its textual detail, to better understand its principles, purposes and applicability, and finally its sustainability within the national context in the area of bibliographic control. At the international level, starting from the publication of the Italian version of RDA and through the research carried out by ICCU and the national working groups, the purpose is a more direct comparison with the experiences of other European countries, also within the EURIG international context, for an exchange of experiences aimed at strengthening the informational content of cataloguing data with respect to the history, cultural traditions and national identities of the different countries.
  9. Edmunds, J.: Zombrary apocalypse!? : RDA, LRM, and the death of cataloging (2017) 0.03
    Abstract
    A brochure on RDA issued in 2010 includes the statements that "RDA goes beyond earlier cataloguing codes in that it provides guidelines on cataloguing digital resources and a stronger emphasis on helping users find, identify, select, and obtain the information they want. RDA also supports clustering of bibliographic records to show relationships between works and their creators. This important new feature makes users more aware of a work's different editions, translations, or physical formats - an exciting development." Setting aside the fact that the author(s) of these statements and I differ on the definition of exciting, their claims are, at best, dubious. There is no evidence, empirical or anecdotal, that bibliographic records created using RDA are any better than records created using AACR2 (or AACR, for that matter) in "helping users find, identify, select, and obtain the information they want." The claim is especially unfounded in the context of the current discovery ecosystem, in which users are perfectly capable of finding, identifying, selecting, and obtaining information with absolutely no assistance from libraries or the bibliographic data libraries create.
    Equally fallacious is the statement that support for the "clustering bibliographic records to show relationships between works and their creators" is an "important new feature" of RDA. AACR2 bibliographic records and the systems housing them can, did, and do show such relationships. Finally, whether users want or care to be made "more aware of a work's different editions, translations, or physical formats" is debatable. As an aim, it sounds less like what a user wants and more like what a cataloging librarian thinks a user should want. As Amanda Cossham writes in her recently issued doctoral thesis: "The explicit focus on user needs in the FRBR model, the International Cataloguing Principles, and RDA: Resource Description and Access does not align well with the ways that users use, understand, and experience library catalogues nor with the ways that they understand and experience the wider information environment. User tasks, as constituted in the FRBR model and RDA, are insufficient to meet users' needs." (p. 11, emphasis in the original)
    The point of this paper is not to critique RDA (a futile task, since RDA is here to stay), but to make plain that its claim to be a solution to the challenge(s) of bibliographic description in the Internet Age is unfounded, and, secondarily, to explain why such wild claims continue to be advanced and go unchallenged by the rank and file of career catalogers.
  10. Baker, T.; Bermès, E.; Coyle, K.; Dunsire, G.; Isaac, A.; Murray, P.; Panzer, M.; Schneider, J.; Singer, R.; Summers, E.; Waites, W.; Young, J.; Zeng, M.: Library Linked Data Incubator Group Final Report (2011) 0.03
    Abstract
    The mission of the W3C Library Linked Data Incubator Group, chartered from May 2010 through August 2011, has been "to help increase global interoperability of library data on the Web, by bringing together people involved in Semantic Web activities - focusing on Linked Data - in the library community and beyond, building on existing initiatives, and identifying collaboration tracks for the future." In Linked Data [LINKEDDATA], data is expressed using standards such as the Resource Description Framework (RDF) [RDF], which specifies relationships between things, and Uniform Resource Identifiers (URIs, or "Web addresses") [URI]. This final report of the Incubator Group examines how Semantic Web standards and Linked Data principles can be used to make the valuable information assets that libraries create and curate (resources such as bibliographic data, authorities, and concept schemes) more visible and re-usable outside of their original library context on the wider Web. The Incubator Group began by eliciting reports on relevant activities from parties ranging from small, independent projects to national library initiatives (see the separate report, Library Linked Data Incubator Group: Use Cases) [USECASE]. These use cases provided the starting point for the work summarized in the report: an analysis of the benefits of library Linked Data, a discussion of current issues with regard to traditional library data, existing library Linked Data initiatives, and legal rights over library data; and recommendations for next steps. The report also summarizes the results of a survey of current Linked Data technologies and an inventory of library Linked Data resources available today (see also the more detailed report, Library Linked Data Incubator Group: Datasets, Value Vocabularies, and Metadata Element Sets) [VOCABDATASET].
    Key recommendations of the report are: - That library leaders identify sets of data as possible candidates for early exposure as Linked Data and foster a discussion about Open Data and rights; - That library standards bodies increase library participation in Semantic Web standardization, develop library data standards that are compatible with Linked Data, and disseminate best-practice design patterns tailored to library Linked Data; - That data and systems designers design enhanced user services based on Linked Data capabilities, create URIs for the items in library datasets, develop policies for managing RDF vocabularies and their URIs, and express library data by re-using or mapping to existing Linked Data vocabularies; - That librarians and archivists preserve Linked Data element sets and value vocabularies and apply library experience in curation and long-term preservation to Linked Data datasets.
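    The Linked Data principles the report builds on (resources identified by URIs, described by RDF statements linking those URIs) can be sketched in a few lines of plain Python. This is a minimal illustration only: the bibliographic and authority URIs below are hypothetical placeholders, though the Dublin Core and FOAF property URIs are real vocabulary terms.

    ```python
    # Minimal sketch of expressing a bibliographic record as RDF triples
    # (subject, predicate, object), serialized as N-Triples. The example
    # resource URIs are hypothetical placeholders.

    book = "http://example.org/bib/123"          # a bibliographic resource
    author = "http://example.org/auth/jane-doe"  # an authority record

    triples = [
        (book, "http://purl.org/dc/terms/title", '"Linked Data for Libraries"'),
        (book, "http://purl.org/dc/terms/creator", f"<{author}>"),
        (author, "http://xmlns.com/foaf/0.1/name", '"Jane Doe"'),
    ]

    def to_ntriples(triples):
        """Serialize (s, p, o) tuples as N-Triples lines."""
        lines = []
        for s, p, o in triples:
            # Objects may be literals ("...") or URIs; bare URIs get angle brackets.
            obj = o if o.startswith(("<", '"')) else f"<{o}>"
            lines.append(f"<{s}> <{p}> {obj} .")
        return "\n".join(lines)

    print(to_ntriples(triples))
    ```

    Each emitted line is one RDF statement; publishing such statements at resolvable URIs is what makes library data "linked" and re-usable outside its original context.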
  11. Parent, I.: ¬The importance of national bibliographies in the digital age (2007) 0.03
    
    Abstract
    Technological developments are introducing both challenges and opportunities for the future production of national bibliographies. There are new complex issues which must be addressed collectively by national bibliographic agencies. As an international community, we must consider new methods and models for the on-going provision of authoritative data in national bibliographies, which continue to play an essential role in the control of and access to each country's published heritage.
  12. Landry, P.: Providing multilingual subject access through linking of subject heading languages : the MACS approach (2009) 0.03
    
    Abstract
    The MACS project aims at providing multilingual subject access to library catalogues through the use of concordances between subject headings from LCSH, RAMEAU and SWD. The manual approach, as used by MACS, has so far been the most reliable method for ensuring accurate multilingual subject access to bibliographic data. The presentation will give an overview of the development of the project and will outline the strategy and methods used by the MACS project. The presentation will also include a demonstration of the search interface developed by The European Library (TEL).
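    In spirit, the concordances MACS maintains behave like clusters of equivalent headings across the three subject heading languages. A minimal sketch with one illustrative cluster follows; the heading forms shown are examples in the style of each vocabulary, not actual MACS records.

    ```python
    # Hypothetical concordance clusters linking equivalent subject headings
    # across LCSH (English), RAMEAU (French) and SWD (German), in the
    # spirit of the MACS approach. The headings are illustrative examples.

    concordance = [
        {"lcsh": "World War, 1939-1945",
         "rameau": "Guerre mondiale (1939-1945)",
         "swd": "Weltkrieg <1939-1945>"},
    ]

    def find_equivalents(heading, source):
        """Given a heading in one subject heading language, return the
        linked headings in the other languages, or None if unmapped."""
        for cluster in concordance:
            if cluster.get(source) == heading:
                return {k: v for k, v in cluster.items() if k != source}
        return None

    # A query entered with the German SWD heading can be expanded to the
    # English and French equivalents before searching the catalogues:
    print(find_equivalents("Weltkrieg <1939-1945>", "swd"))
    ```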
  13. Patton, G.E.: From FRBR to FRAD : extending the Model (2009) 0.03
    
    Abstract
    A report on the completion of the work of the IFLA Working Group on Functional Requirements and Numbering of Authority Records which was charged by the IFLA Division of Bibliographic Control to extend the FRBR model to authority data.
  14. Resource Description & Access (RDA) (o.J.) 0.03
    
    Abstract
    RDA Blog or Resource Description & Access Blog is a blog on Resource Description and Access (RDA), a new library cataloging standard that provides instructions and guidelines on formulating data for resource description and discovery. The standard is organized based on the Functional Requirements for Bibliographic Records (FRBR) and is intended for use by libraries and other cultural organizations as a replacement for the Anglo-American Cataloguing Rules (AACR2). Free for everyone, forever.
  15. Lusti, M.: Data Warehousing and Data Mining : Eine Einführung in entscheidungsunterstützende Systeme (1999) 0.03
    
    Date
    17. 7.2002 19:22:06
    RSWK
    Data-warehouse-Konzept / Lehrbuch
    Data mining / Lehrbuch
    Subject
    Data-warehouse-Konzept / Lehrbuch
    Data mining / Lehrbuch
    Theme
    Data Mining
  16. Danowski, P.: Step one: blow up the silo! : Open bibliographic data, the first step towards Linked Open Data (2010) 0.03
    
    Abstract
    More and more libraries are starting Semantic Web projects, yet the question of the data's license is often not discussed, or the discussion is deferred to the end of the project. This paper discusses why the question of licensing is so important in the context of the Semantic Web that it should be one of the first aspects addressed in any Semantic Web project. It also shows why a public domain waiver is the only solution that fulfils the special requirements of the Semantic Web and guarantees the reusability of semantic library data, and thus the sustainability of such projects.
  17. Mitchell, J.S.; Panzer, M.: Dewey linked data : Making connections with old friends and new acquaintances (2012) 0.03
    
    Abstract
    This paper explores the history, use cases, and future plans associated with the availability of the Dewey Decimal Classification (DDC) system as linked data. Parts of the Dewey Decimal Classification (DDC) system have been available as linked data since 2009. Initial efforts included the DDC Summaries (the top three levels of the DDC) in eleven languages exposed as linked data in dewey.info. In 2010, the content of dewey.info was further extended by the addition of assignable numbers and captions from the Abridged Edition 14 data files in English, Italian, and Vietnamese. During 2012, we will add assignable numbers and captions from the latest full edition database, DDC 23. In addition to the "old friends" of different Dewey language versions, institutions such as the British Library and Deutsche Nationalbibliothek have made use of Dewey linked data in bibliographic records and authority files, and AGROVOC has linked to our data at a general level. We expect to extend our linked data network shortly to "new acquaintances" such as GeoNames, ISO 639-3 language codes, and Mathematics Subject Classification. In particular, we will examine the linking process to GeoNames as an example of cross-domain vocabulary alignment. In addition to linking plans, we report on use cases that facilitate machine-assisted categorization and support discovery in the Semantic Web environment.
    Content
    Text of presentations held at the international seminar "Global Interoperability and Linked Data in Libraries", Firenze, June 18-19, 2012.
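    The kind of data a service like dewey.info exposes (class numbers with multilingual captions) can be pictured as SKOS-style statements about a class. The sketch below is illustrative only: the URI pattern and the Italian caption are assumptions for the example, not taken from the actual service, although DDC 641 ("Food and drink") is a real class.

    ```python
    # Sketch of a DDC class exposed as SKOS-style linked data, in the
    # spirit of dewey.info. The URI pattern is a hypothetical placeholder;
    # check the real service for actual identifiers.

    ddc_class = {
        "uri": "http://dewey.info/class/641/",  # hypothetical URI pattern
        "notation": "641",
        "captions": {
            "en": "Food and drink",
            "it": "Cibi e bevande",  # illustrative translation
        },
    }

    def skos_triples(cls):
        """Emit (subject, predicate, object) triples for one class,
        one skos:prefLabel per language."""
        s = cls["uri"]
        triples = [(s, "skos:notation", cls["notation"])]
        for lang, label in sorted(cls["captions"].items()):
            triples.append((s, "skos:prefLabel", f'"{label}"@{lang}'))
        return triples

    for t in skos_triples(ddc_class):
        print(t)
    ```

    Language-tagged prefLabels are what make the same class resolvable and displayable across the different Dewey language versions mentioned above.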
  18. Gorman, M.: Bibliographic control or chaos : an agenda for national bibliographic services in the 21st century (2001) 0.03
    
  19. Frey, J.; Streitmatter, D.; Götz, F.; Hellmann, S.; Arndt, N.: DBpedia Archivo (2020) 0.03
    
    Content
    # Community action on individual ontologies We would like to call on all ontology maintainers and consumers to help us increase the average star rating of the web of ontologies by fixing and improving its ontologies. You can easily check an ontology at https://archivo.dbpedia.org/info. If you are an ontology maintainer, just release a patched version - Archivo will automatically pick it up 8 hours later. If you are a user of an ontology and want your consumed data to become FAIRer, please inform the ontology maintainer about the issues found with Archivo. The star rating is very basic and only requires fixing small things. However, the impact on technical and legal usability can be immense.
    # How does Archivo work? Each week Archivo runs several discovery algorithms to scan for new ontologies. Once discovered, Archivo checks them every 8 hours. When changes are detected, Archivo downloads, rates, and archives the latest snapshot persistently on the DBpedia Databus. # Archivo's mission Archivo's mission is to improve the FAIRness (findability, accessibility, interoperability, and reusability) of all available ontologies on the Semantic Web. Archivo is not a guideline; it is fully automated, machine-readable, and enforces interoperability with its star rating. - Ontology developers can implement against Archivo until they reach more stars. The stars and tests are designed to guarantee the interoperability and fitness of the ontology. - Ontology users can better find, access and re-use ontologies. Snapshots are persisted in case the original is no longer reachable, adding a layer of reliability to the decentralized web of ontologies.
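    The 8-hourly check described above reduces to a simple idea: hash each fetched snapshot and archive a new version only when the hash differs from the last one seen. The sketch below is an illustrative guess at that mechanism, not Archivo's actual code; the fetch itself is stubbed out.

    ```python
    # Sketch of Archivo-style change detection: hash each fetched ontology
    # snapshot and treat it as changed only when the hash differs from the
    # previous one. Hypothetical illustration of the mechanism.

    import hashlib

    def detect_change(content, last_hash):
        """Return (changed, new_hash) for a freshly fetched ontology file."""
        new_hash = hashlib.sha256(content).hexdigest()
        return (new_hash != last_hash, new_hash)

    # First fetch: there is no previous hash, so it counts as a change
    # and the snapshot would be archived.
    changed, h = detect_change(b"<owl:Ontology/>", None)
    print(changed)
    ```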
  20. Functional Requirements for Subject Authority Data (FRSAD) : a conceptual model (2009) 0.03
    
    Abstract
    Subject access to information has been the predominant approach of users to satisfy their information needs. Research demonstrates that the integration of controlled vocabulary information with an information retrieval system helps users perform more effective subject searches. This integration becomes possible when subject authority data (information about subjects from authority files) are linked to bibliographic files and are made available to users. The purpose of authority control is to ensure consistency in representing a value (a name of a person, a place name, or a subject term) in the elements used as access points in information retrieval. For example, "World War, 1939-1945" has been established as an authorized subject heading in the Library of Congress Subject Headings (LCSH). When using LCSH, in cataloging or indexing, all publications about World War II are assigned the established heading regardless of whether a publication refers to the war as the "European War, 1939-1945", "Second World War", "World War 2", "World War II", "WWII", "World War Two", or "2nd World War." The synonymous expressions are referred to by the authorized heading. This ensures that all publications about World War II can be retrieved by and displayed under the same subject heading, either in an individual institution's own catalog or database or in a union catalog that contains bibliographic records from a number of individual libraries or databases. In almost all large bibliographic databases, authority control is achieved manually or semi-automatically by means of an authority file. The file contains records of headings or access points - names, titles, or subjects - that have been authorized for use in bibliographic records. In addition to ensuring consistency in subject representation, a subject authority record also records and maintains semantic relationships among subject terms and/or their labels.
Records in a subject authority file are connected through semantic relationships, which may be expressed statically in subject authority records or generated dynamically according to the specific needs (e.g., presenting the broader and narrower terms) of printed or online display of thesauri, subject headings lists, classification schemes, and other knowledge organization systems.
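    The World War II example above amounts to a lookup from variant expressions to the single authorized heading. A minimal sketch of that resolution step, using the variants listed in the abstract:

    ```python
    # Minimal authority-control sketch: variant subject terms resolve to
    # the single authorized LCSH heading, mirroring the World War II
    # example in the abstract.

    AUTHORIZED = "World War, 1939-1945"
    VARIANTS = {
        "European War, 1939-1945", "Second World War", "World War 2",
        "World War II", "WWII", "World War Two", "2nd World War",
    }

    def authorize(term):
        """Return the authorized heading for a known variant; terms not
        under authority control here pass through unchanged."""
        if term == AUTHORIZED or term in VARIANTS:
            return AUTHORIZED
        return term

    print(authorize("WWII"))
    ```

    In a real system this lookup is backed by authority records that also carry the broader/narrower and related-term relationships the abstract goes on to describe.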
