Search (85 results, page 1 of 5)

  • theme_ss:"Normdateien"
  1. Buizza, P.: Bibliographic control and authority control from Paris principles to the present (2004) 0.03
    0.032969773 = product of:
      0.0879194 = sum of:
        0.038619664 = weight(_text_:wide in 5667) [ClassicSimilarity], result of:
          0.038619664 = score(doc=5667,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.29372054 = fieldWeight in 5667, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=5667)
        0.029630389 = weight(_text_:web in 5667) [ClassicSimilarity], result of:
          0.029630389 = score(doc=5667,freq=4.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.3059541 = fieldWeight in 5667, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=5667)
        0.019669347 = weight(_text_:data in 5667) [ClassicSimilarity], result of:
          0.019669347 = score(doc=5667,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.2096163 = fieldWeight in 5667, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=5667)
      0.375 = coord(3/8)
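    The breakdown above is Lucene's "explain" output for ClassicSimilarity (TF-IDF) ranking: each matching query term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(termFreq) × idf × fieldNorm, and the sum is multiplied by the coordination factor (matched terms / query terms). A minimal sketch, reusing the figures reported for this record (not the engine's actual code), reproduces the score:
      # Sketch that reproduces the explain tree above; tf, idf, queryNorm and fieldNorm
      # are copied from the tree. ClassicSimilarity per-term score:
      #   (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm),
      # summed over matching terms and multiplied by coord(matching terms / query terms).
      import math

      QUERY_NORM = 0.029675366   # queryNorm reported in the explain output
      FIELD_NORM = 0.046875      # length norm of the matched field in doc 5667

      terms = {
          # term: (frequency in the field, idf from the explain output)
          "wide": (2.0, 4.4307585),
          "web":  (4.0, 3.2635105),
          "data": (2.0, 3.1620505),
      }

      def term_score(freq: float, idf: float) -> float:
          query_weight = idf * QUERY_NORM                     # query-side weight
          field_weight = math.sqrt(freq) * idf * FIELD_NORM   # tf is sqrt(freq) in ClassicSimilarity
          return query_weight * field_weight

      total = sum(term_score(freq, idf) for freq, idf in terms.values())
      coord = 3 / 8   # 3 of the 8 query terms matched this document
      print(round(total * coord, 6))   # -> 0.03297, matching the reported score
    The same arithmetic, with different term frequencies and field norms, accounts for every score tree in this result list.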
    
    Abstract
    Forty years ago the ICCP in Paris laid the foundations of international co-operation in descriptive cataloging without explicitly speaking of authority control. Some of the factors in the evolution of authority control are the development of catalogs (from card catalog to local automation, to today's OPAC on the Web) and services provided by libraries (from individual service to local users to system networks, to the World Wide Web), as well as international agreements on cataloging (from Paris Principles to the UBC programme, to the report on Mandatory data elements for internationally shared resource authority records). This evolution progressed from the principle of uniform heading to the definition of authority entries and records, and from the responsibility of national bibliographic agencies for the form of the names of their own authors to be shared internationally to the concept of authorized equivalent heading. Some issues of the present state are the persisting differences among national rules and the aim of respecting both local culture and language and international readability.
  2. Rotenberg, E.; Kushmerick, A.: The author challenge : identification of self in the scholarly literature (2011) 0.03
    0.029715322 = product of:
      0.07924086 = sum of:
        0.038619664 = weight(_text_:wide in 1332) [ClassicSimilarity], result of:
          0.038619664 = score(doc=1332,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.29372054 = fieldWeight in 1332, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=1332)
        0.020951848 = weight(_text_:web in 1332) [ClassicSimilarity], result of:
          0.020951848 = score(doc=1332,freq=2.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.21634221 = fieldWeight in 1332, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1332)
        0.019669347 = weight(_text_:data in 1332) [ClassicSimilarity], result of:
          0.019669347 = score(doc=1332,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.2096163 = fieldWeight in 1332, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=1332)
      0.375 = coord(3/8)
    
    Abstract
    Considering the expansion of research output across the globe, along with the growing demand for quantitative tracking of research outcomes by government authorities and research institutions, the challenges of author identity are increasing. In recent years, a number of initiatives to help solve the author "name game" have been launched from all areas of the scholarly information market space. This article introduces the various author identification tools and services Thomson Reuters provides, including Distinct Author Sets and ResearcherID, which reflect a combination of automated clustering and author participation, as well as the use of other data types, such as grants and patents, to expand the universe of author identification. Industry-wide initiatives such as the Open Researcher and Contributor ID (ORCID) are also described. Future author-related developments in ResearcherID and Thomson Reuters Web of Knowledge are also included.
  3. Danowski, P.: Authority files and Web 2.0 : Wikipedia and the PND. An Example (2007) 0.02
    0.023010893 = product of:
      0.061362382 = sum of:
        0.03491975 = weight(_text_:web in 1291) [ClassicSimilarity], result of:
          0.03491975 = score(doc=1291,freq=8.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.36057037 = fieldWeight in 1291, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1291)
        0.016391123 = weight(_text_:data in 1291) [ClassicSimilarity], result of:
          0.016391123 = score(doc=1291,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.17468026 = fieldWeight in 1291, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1291)
        0.010051507 = product of:
          0.020103013 = sum of:
            0.020103013 = weight(_text_:22 in 1291) [ClassicSimilarity], result of:
              0.020103013 = score(doc=1291,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.19345059 = fieldWeight in 1291, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1291)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    In Web 2.0, more and more users index everything on their own: there are services for links, videos, pictures, books, encyclopaedic articles and scientific articles, and all of them are independent of libraries. But must that really be the case? Can't libraries contribute their experience and tools to make user indexing better? Drawing on a project that brought the German-language Wikipedia together with the German name authority file (Personennamendatei, PND) maintained by the German National Library (Deutsche Nationalbibliothek), I would like to show what is possible: how users can and will use authority files if we let them. We will look at how the project worked and what we can learn from it for future projects. Conclusions: authority files can play a role in Web 2.0; there must be an open interface or service for retrieval; everything on the net that is indexed with authority files can easily be integrated into a federated search; and, following O'Reilly, you have to find ways for your data to become more valuable the more it is used.
    Content
    Presentation given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
    Object
    Web 2.0
  4. Junger, U.; Schwens, U.: Die inhaltliche Erschließung des schriftlichen kulturellen Erbes auf dem Weg in die Zukunft : Automatische Vergabe von Schlagwörtern in der Deutschen Nationalbibliothek (2017) 0.02
    0.022171604 = product of:
      0.088686414 = sum of:
        0.016391123 = weight(_text_:data in 3780) [ClassicSimilarity], result of:
          0.016391123 = score(doc=3780,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.17468026 = fieldWeight in 3780, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3780)
        0.07229529 = sum of:
          0.05219228 = weight(_text_:mining in 3780) [ClassicSimilarity], result of:
            0.05219228 = score(doc=3780,freq=2.0), product of:
              0.16744171 = queryWeight, product of:
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.029675366 = queryNorm
              0.31170416 = fieldWeight in 3780, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3780)
          0.020103013 = weight(_text_:22 in 3780) [ClassicSimilarity], result of:
            0.020103013 = score(doc=3780,freq=2.0), product of:
              0.103918076 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.029675366 = queryNorm
              0.19345059 = fieldWeight in 3780, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3780)
      0.25 = coord(2/8)
    
    Abstract
    We live in the 21st century, and much of what would have been dismissed as science fiction a hundred or even fifty years ago is now reality. Space probes fly to Mars, run experiments there and send data back to Earth. Robots are used for routine tasks, for example in industry or in medicine. Digitisation, artificial intelligence and automated procedures have become part of everyday life, and many of these processes are based on learning algorithms. The ongoing digital transformation is global and affects all areas of life and work: the economy, society and politics. It opens up new possibilities from which libraries can also benefit. The sharp increase in digital publications, which form an important and ever-growing share of our cultural heritage, should prompt libraries to take up these possibilities actively. The fact that digital content can be analysed, for example through text and data mining (TDM), and the development of technical procedures for networking content and placing it in semantic relationships, leave room to rethink library indexing methods as well. For some years, therefore, the German National Library (Deutsche Nationalbibliothek, DNB) has been examining how the processes for subject indexing of media works can be improved and supported by machines. In doing so it maintains a regular collegial exchange with other libraries that are actively working on the same question, as well as with European national libraries that are in turn interested in the topic and in the DNB's experience. As a national library with extensive holdings of digital publications, the DNB has also built up expertise in digital long-term preservation and is valued within its network of partners as a competent interlocutor.
    Date
    19. 8.2017 9:24:22
  5. Kaiser, M.; Lieder, H.J.; Majcen, K.; Vallant, H.: New ways of sharing and using authority information : the LEAF project (2003) 0.02
    0.019453965 = product of:
      0.051877238 = sum of:
        0.016091526 = weight(_text_:wide in 1166) [ClassicSimilarity], result of:
          0.016091526 = score(doc=1166,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.122383565 = fieldWeight in 1166, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1166)
        0.017459875 = weight(_text_:web in 1166) [ClassicSimilarity], result of:
          0.017459875 = score(doc=1166,freq=8.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.18028519 = fieldWeight in 1166, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1166)
        0.018325834 = weight(_text_:data in 1166) [ClassicSimilarity], result of:
          0.018325834 = score(doc=1166,freq=10.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.19529848 = fieldWeight in 1166, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1166)
      0.375 = coord(3/8)
    
    Abstract
    This article presents an overview of the LEAF project (Linking and Exploring Authority Files), which has set out to provide a framework for international, collaborative work in the sector of authority data with respect to authority control. Elaborating the virtues of authority control in today's Web environment is an almost futile exercise, since so much has been said and written about it in the last few years. The World Wide Web is generally understood to be poorly structured, both with regard to content and to locating required information. Highly structured databases might be viewed as small islands of precision within this chaotic environment. Though the Web in general or any particular structured database would greatly benefit from increased authority control, it should be noted that the following considerations only refer to authority control with regard to databases of "memory institutions" (i.e., libraries, archives, and museums). Moreover, when talking about authority records, we exclusively refer to personal name authority records that describe a specific person. Although different types of authority records could indeed be used in similar ways to the ones presented in this article, discussing those different types is outside the scope of both the LEAF project and this article. Personal name authority records, like all other "authorities", are maintained as separate records and linked to various kinds of descriptive records. Name authority records are usually kept either in independent databases or in separate tables in the database containing the descriptive records. This practice points to a crucial benefit: by linking any number of descriptive records to an authorized name record, the records related to this entity are collocated in the database. Variant forms of the authorized name are referenced in the authority records and thus ensure the consistency of the database while enabling search and retrieval operations that produce accurate results. On the one hand, authority control may be viewed as a positive prerequisite of a consistent catalogue; on the other, the creation of new authority records is a very time-consuming and expensive undertaking. As a consequence, various models of providing access to existing authority records have emerged: the Library of Congress and the French National Library (Bibliothèque nationale de France), for example, make their authority records available to all via a web-based search service. In Germany, the Personal Name Authority File (PND, Personennamendatei) maintained by the German National Library (Die Deutsche Bibliothek, Frankfurt/Main) offers a different approach to shared access: within a closed network, participating institutions have online access to their pooled data. The number of recent projects and initiatives that have addressed the issue of authority control in one way or another is considerable. Two important current initiatives should be mentioned here: the Name Authority Cooperative (NACO) and the Virtual International Authority File (VIAF).
    NACO was established in 1976 and is hosted by the Library of Congress. At the beginning of 2003, nearly 400 institutions were involved in this undertaking, including 43 institutions from outside the United States. Despite the enormous success of NACO and the impressive annual growth of the initiative, there are requirements for participation that form an obstacle for many institutions: they have to follow the Anglo-American Cataloguing Rules (AACR2) and employ the MARC21 data format. Participating institutions also have to belong to either OCLC (Online Computer Library Center) or RLG (Research Libraries Group) in order to be able to contribute records, and they have to provide a specified minimum number of authority records per year. A recent proof-of-concept project of the Library of Congress, OCLC and the German National Library, the Virtual International Authority File (VIAF), will, in its first phase, test automatic linking of the records of the Library of Congress Name Authority File (LCNAF) and the German Personal Name Authority File by using matching algorithms and software developed by OCLC. The results are expected to form the basis of a "Virtual International Authority File". The project will then test the maintenance of the virtual authority file by employing the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) to harvest the metadata for new, updated, and deleted records. When using the "Virtual International Authority File", a cataloguer will be able to check the system to see whether the authority record he wants to establish already exists. The final phase of the project will test possibilities for displaying records in the preferred language and script of the end user. Currently, there are still some clear limitations associated with the ways in which authority records are used by memory institutions. One of the main problems has to do with limited access: generally only large institutions or those that are part of a library network have unlimited online access to permanently updated authority records. Smaller institutions outside these networks usually have to fall back on less efficient ways of obtaining authority data, or have no access at all. Cross-domain sharing of authority data between libraries, archives, museums and other memory institutions simply does not happen at present. Public users are, by and large, not even aware that such things as name authority records exist and are excluded from access to these information resources.
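    The project description above mentions maintaining the virtual authority file by harvesting new, updated and deleted records over OAI-PMH. The verbs and arguments used below (ListRecords, metadataPrefix, from, the deleted-record status) come from the OAI-PMH specification; the endpoint URL is a placeholder, not an actual VIAF service:
      # Minimal sketch of an incremental OAI-PMH harvest: list records changed since a date.
      import urllib.parse
      import urllib.request
      import xml.etree.ElementTree as ET

      BASE_URL = "https://example.org/oai"    # placeholder endpoint
      OAI = "{http://www.openarchives.org/OAI/2.0/}"

      params = {
          "verb": "ListRecords",
          "metadataPrefix": "oai_dc",         # simple Dublin Core, the mandatory minimum
          "from": "2024-01-01",               # only records created or changed since this date
      }
      with urllib.request.urlopen(BASE_URL + "?" + urllib.parse.urlencode(params)) as resp:
          tree = ET.parse(resp)

      for record in tree.iter(OAI + "record"):
          header = record.find(OAI + "header")
          identifier = header.findtext(OAI + "identifier")
          status = header.get("status", "updated")   # deleted records carry status="deleted"
          print(identifier, status)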
  6. Russell, B.M.; Spillane, J.L.: Using the Web for name authority work (2001) 0.02
    0.018486751 = product of:
      0.073947005 = sum of:
        0.0598749 = weight(_text_:web in 167) [ClassicSimilarity], result of:
          0.0598749 = score(doc=167,freq=12.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.6182494 = fieldWeight in 167, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=167)
        0.014072108 = product of:
          0.028144216 = sum of:
            0.028144216 = weight(_text_:22 in 167) [ClassicSimilarity], result of:
              0.028144216 = score(doc=167,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.2708308 = fieldWeight in 167, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=167)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    While many catalogers are using the Web to find the information they need to perform authority work quickly and accurately, the full potential of the Web to assist catalogers in name authority work has yet to be realized. The ever-growing nature of the Web means that available information for creating personal name, corporate name, and other types of headings will increase. In this article, we examine ways in which simple and effective Web searching can save catalogers time and money in the process of authority work. In addition, questions involving evaluating authority information found on the Web are explored.
    Date
    10. 9.2000 17:38:22
  7. Haffner, A.: Internationalisierung der GND durch das Semantic Web (2012) 0.02
    0.017251467 = product of:
      0.06900587 = sum of:
        0.038649082 = weight(_text_:web in 318) [ClassicSimilarity], result of:
          0.038649082 = score(doc=318,freq=20.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.39907828 = fieldWeight in 318, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=318)
        0.030356785 = weight(_text_:data in 318) [ClassicSimilarity], result of:
          0.030356785 = score(doc=318,freq=14.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.32351238 = fieldWeight in 318, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.02734375 = fieldNorm(doc=318)
      0.25 = coord(2/8)
    
    Abstract
    Seit Bestehen der Menschheit sammelt der Mensch Informationen, seit Bestehen des Internets stellt der Mensch Informationen ins Web, seit Bestehen des Semantic Webs sollen auch Maschinen in die Lage versetzt werden mit diesen Informationen umzugehen. Das Bibliothekswesen ist einer der Sammler. Seit Jahrhunderten werden Kataloge und Bibliografien sowie Inventarnachweise geführt. Mit der Aufgabe des Zettelkatalogs hin zum Onlinekatalog wurde es Benutzern plötzlich möglich in Beständen komfortabel zu suchen. Durch die Bereitstellung von Daten aus dem Bibliothekswesen im Semantic Web sollen nicht nur die eigenen Katalogsysteme Zugriff auf diese Informationen erhalten, sondern jede beliebige Anwendung, die auf das Web zugreifen kann. Darüber hinaus ist die Vorstellung, dass sich die im Web befindenden Daten - in sofern möglich - miteinander verlinken und zu einem gigantischen semantischen Netz werden, das als ein großer Datenpool verwendet werden kann. Die Voraussetzung hierfür ist wie beim Übergang zum Onlinekatalog die Aufbereitung der Daten in einem passenden Format. Normdaten dienen im Bibliothekswesen bereits dazu eine Vernetzung der unterschiedlichen Bestände zu erlauben. Bei der Erschließung eines Buches wird nicht bloß gesagt, dass jemand, der Thomas Mann heißt, der Autor ist - es wird eine Verknüpfung vom Katalogisat zu dem Thomas Mann erzeugt, der am 6. Juni 1875 in Lübeck geboren und am 12. August 1955 in Zürich verstorben ist. Der Vorteil von Normdateneintragungen ist, dass sie zum eindeutigen Nachweis der Verfasserschaft oder Mitwirkung an einem Werk beitragen. Auch stehen Normdateneintragungen bereits allen Bibliotheken für die Nachnutzung bereit - der Schritt ins Semantic Web wäre somit die Öffnung der Normdaten für alle denkbaren Nutzergruppen.
    Die Gemeinsame Normdatei (GND) ist seit April 2012 die Datei, die die im deutschsprachigen Bibliothekswesen verwendeten Normdaten enthält. Folglich muss auf Basis dieser Daten eine Repräsentation für die Darstellung als Linked Data im Semantic Web etabliert werden. Neben der eigentlichen Bereitstellung von GND-Daten im Semantic Web sollen die Daten mit bereits als Linked Data vorhandenen Datenbeständen (DBpedia, VIAF etc.) verknüpft und nach Möglichkeit kompatibel sein, wodurch die GND einem internationalen und spartenübergreifenden Publikum zugänglich gemacht wird. Dieses Dokument dient vor allem zur Beschreibung, wie die GND-Linked-Data-Repräsentation entstand und dem Weg zur Spezifikation einer eignen Ontologie. Hierfür werden nach einer kurzen Einführung in die GND die Grundprinzipien und wichtigsten Standards für die Veröffentlichung von Linked Data im Semantic Web vorgestellt, um darauf aufbauend existierende Vokabulare und Ontologien des Bibliothekswesens betrachten zu können. Anschließend folgt ein Exkurs in das generelle Vorgehen für die Bereitstellung von Linked Data, wobei die so oft zitierte Open World Assumption kritisch hinterfragt und damit verbundene Probleme insbesondere in Hinsicht Interoperabilität und Nachnutzbarkeit aufgedeckt werden. Um Probleme der Interoperabilität zu vermeiden, wird den Empfehlungen der Library Linked Data Incubator Group [LLD11] gefolgt.
    Im Kapitel Anwendungsprofile als Basis für die Ontologieentwicklung wird die Spezifikation von Dublin Core Anwendungsprofilen kritisch betrachtet, um auszumachen wann und in welcher Form sich ihre Verwendung bei dem Vorhaben Bereitstellung von Linked Data anbietet. In den nachfolgenden Abschnitten wird die GND-Ontologie, welche als Standard für die Serialisierung von GND-Daten im Semantic Web dient, samt Modellierungsentscheidungen näher vorgestellt. Dabei wird insbesondere der Technik des Vocabulary Alignment eine prominente Position eingeräumt, da darin ein entscheidender Mechanismus zur Steigerung der Interoperabilität und Nachnutzbarkeit gesehen wird. Auch wird sich mit der Verlinkung zu externen Datensets intensiv beschäftigt. Hierfür wurden ausgewählte Datenbestände hinsichtlich ihrer Qualität und Aktualität untersucht und Empfehlungen für die Implementierung innerhalb des GND-Datenbestandes gegeben. Abschließend werden eine Zusammenfassung und ein Ausblick auf weitere Schritte gegeben.
  8. Tillett, B.B.: Complementarity of perspectives for resource descriptions (2015) 0.02
    0.016723264 = product of:
      0.066893056 = sum of:
        0.03024139 = weight(_text_:web in 2288) [ClassicSimilarity], result of:
          0.03024139 = score(doc=2288,freq=6.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.3122631 = fieldWeight in 2288, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2288)
        0.036651667 = weight(_text_:data in 2288) [ClassicSimilarity], result of:
          0.036651667 = score(doc=2288,freq=10.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.39059696 = fieldWeight in 2288, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2288)
      0.25 = coord(2/8)
    
    Abstract
    Bibliographic data is used to describe resources held in the collections of libraries, archives and museums. That data is mostly available on the Web today and mostly as linked data. Also on the Web are the controlled vocabulary systems of name authority files, like the Virtual International Authority File (VIAF), classification systems, and subject terms. These systems offer their own linked data to potentially help users find the information they want - whether at their local library or anywhere in the world that is willing to make their resources available. We have found it beneficial to merge authority data for names on a global level, as the entities are relatively clear. That is not true for subject concepts and terminology that have categorisation systems developed according to varying principles and schemes and are in multiple languages. Rather than requiring everyone in the world to use the same categorisation/classification system in the same language, we know that the Web offers us the opportunity to add descriptors assigned around the world using multiple systems from multiple perspectives to identify our resources. Those descriptors add value to refine searches, help users worldwide and share globally what each library does locally.
  9. Zhu, L.; Xu, A.; Deng, S.; Heng, G.; Li, X.: Entity management using Wikidata for cultural heritage information (2024) 0.02
    0.016047547 = product of:
      0.06419019 = sum of:
        0.024443826 = weight(_text_:web in 975) [ClassicSimilarity], result of:
          0.024443826 = score(doc=975,freq=2.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.25239927 = fieldWeight in 975, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=975)
        0.039746363 = weight(_text_:data in 975) [ClassicSimilarity], result of:
          0.039746363 = score(doc=975,freq=6.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.42357713 = fieldWeight in 975, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=975)
      0.25 = coord(2/8)
    
    Abstract
    Entity management in a Linked Open Data (LOD) environment is a process of associating a unique, persistent, and dereferenceable Uniform Resource Identifier (URI) with a single entity. It allows data from various sources to be reused and connected to the Web. It can help improve data quality and enable more efficient workflows. This article describes a semi-automated entity management project conducted by the "Wikidata: WikiProject Chinese Culture and Heritage Group," explores the challenges and opportunities in describing Chinese women poets and historical places in Wikidata, the largest crowdsourcing LOD platform in the world, and discusses lessons learned and future opportunities.
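    As an illustration of how such persistent, dereferenceable URIs can be queried, the sketch below asks Wikidata's public SPARQL endpoint for a few items that carry a GND authority identifier. The property and item numbers (P227 for GND ID, P106 for occupation, Q49757 for poet) are assumptions to verify against Wikidata, not details taken from the article:
      # Sketch: fetch Wikidata items with a GND identifier via the public SPARQL endpoint.
      import json
      import urllib.parse
      import urllib.request

      ENDPOINT = "https://query.wikidata.org/sparql"
      query = """
      SELECT ?item ?itemLabel ?gnd WHERE {
        ?item wdt:P227 ?gnd .        # P227: GND identifier (assumed)
        ?item wdt:P106 wd:Q49757 .   # P106: occupation, Q49757: poet (assumed)
        SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
      }
      LIMIT 5
      """
      url = ENDPOINT + "?" + urllib.parse.urlencode({"query": query, "format": "json"})
      req = urllib.request.Request(url, headers={"User-Agent": "authority-demo/0.1"})
      with urllib.request.urlopen(req) as resp:
          data = json.load(resp)

      for row in data["results"]["bindings"]:
          print(row["itemLabel"]["value"], row["gnd"]["value"])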
  10. Altenhöner, R.; Hannemann, J.; Kett, J.: Linked Data aus und für Bibliotheken : Rückgratstärkung im Semantic Web (2010) 0.02
    0.015924674 = product of:
      0.063698694 = sum of:
        0.029630389 = weight(_text_:web in 4264) [ClassicSimilarity], result of:
          0.029630389 = score(doc=4264,freq=4.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.3059541 = fieldWeight in 4264, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4264)
        0.03406831 = weight(_text_:data in 4264) [ClassicSimilarity], result of:
          0.03406831 = score(doc=4264,freq=6.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.3630661 = fieldWeight in 4264, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=4264)
      0.25 = coord(2/8)
    
    Abstract
    The German National Library (Deutsche Nationalbibliothek, DNB) has begun to publish its knowledge base, consisting of bibliographic data on the one hand but above all of its authority data, as Linked Data. By publishing the data as triples, the DNB aims to enable the Semantic Web community to use the entire body of national bibliographic and authority data directly, and thereby to reach entirely new groups of users. At the same time, the door is to be opened to a new way of using data cooperatively. The long-term goal is to establish libraries and other cultural institutions as a reliable backbone of the Web of data.
    Source
    Semantic web & linked data: Elemente zukünftiger Informationsinfrastrukturen ; 1. DGI-Konferenz ; 62. Jahrestagung der DGI ; Frankfurt am Main, 7. - 9. Oktober 2010 ; Proceedings / Deutsche Gesellschaft für Informationswissenschaft und Informationspraxis. Hrsg.: M. Ockenfeld
  11. Vukadin, A.: Development of a classification-oriented authority control : the experience of the National and University Library in Zagreb (2015) 0.01
    0.012192126 = product of:
      0.048768505 = sum of:
        0.020951848 = weight(_text_:web in 2296) [ClassicSimilarity], result of:
          0.020951848 = score(doc=2296,freq=2.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.21634221 = fieldWeight in 2296, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2296)
        0.027816659 = weight(_text_:data in 2296) [ClassicSimilarity], result of:
          0.027816659 = score(doc=2296,freq=4.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.29644224 = fieldWeight in 2296, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2296)
      0.25 = coord(2/8)
    
    Abstract
    The paper presents experiences and challenges encountered during the planning and creation of the Universal Decimal Classification (UDC) authority database in the National and University Library in Zagreb, Croatia. The project started in 2014 with the objective of facilitating classification data management, improving the indexing consistency at the institutional level and the machine readability of data for eventual sharing and re-use in the Web environment. The paper discusses the advantages and disadvantages of UDC, which is an analytico-synthetic classification scheme tending towards a more faceted structure, in regard to various aspects of authority control. This discussion represents the referential framework for the project. It determines the choice of elements to be included in the authority file, e.g. distinguishing between syntagmatic and paradigmatic combinations of subjects. It also determines the future lines of development, e.g. interlinking with the subject headings authority file in order to provide searching by verbal expressions.
  12. Wolverton, R.E.: Becoming an authority on authority control : an annotated bibliography of resources (2006) 0.01
    0.012160225 = product of:
      0.0486409 = sum of:
        0.03456879 = weight(_text_:web in 120) [ClassicSimilarity], result of:
          0.03456879 = score(doc=120,freq=4.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.35694647 = fieldWeight in 120, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=120)
        0.014072108 = product of:
          0.028144216 = sum of:
            0.028144216 = weight(_text_:22 in 120) [ClassicSimilarity], result of:
              0.028144216 = score(doc=120,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.2708308 = fieldWeight in 120, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=120)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Authority control has long been an important part of the cataloging process. However, few studies have been conducted examining how librarians learn about it. Research conducted to date suggests that many librarians learn about authority control on the job rather than in formal classes. To offer an introduction to authority control information for librarians, an annotated bibliography is provided. It includes monographs, articles and papers, electronic discussion groups, Web sites related to professional conferences, additional Web sites related to authority control, and training offered through the Name Authority Cooperative Program and the Subject Authority Cooperative Program. A summary of possible future trends in authority control is also provided.
    Date
    10. 9.2000 17:38:22
  13. Vellucci, S.L.: Metadata and authority control (2000) 0.01
    0.01163122 = product of:
      0.04652488 = sum of:
        0.03245277 = weight(_text_:data in 180) [ClassicSimilarity], result of:
          0.03245277 = score(doc=180,freq=4.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.34584928 = fieldWeight in 180, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=180)
        0.014072108 = product of:
          0.028144216 = sum of:
            0.028144216 = weight(_text_:22 in 180) [ClassicSimilarity], result of:
              0.028144216 = score(doc=180,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.2708308 = fieldWeight in 180, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=180)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    A variety of information communities have developed metadata schemes to meet the needs of their own users. The ability of libraries to incorporate and use multiple metadata schemes in current library systems will depend on the compatibility of imported data with existing catalog data. Authority control will play an important role in metadata interoperability. In this article, I discuss factors for successful authority control in current library catalogs, which include operation in a well-defined and bounded universe, application of principles and standard practices to access point creation, reference to authoritative lists, and bibliographic record creation by highly trained individuals. Metadata characteristics and environmental models are examined and the likelihood of successful authority control is explored for a variety of metadata environments.
    Date
    10. 9.2000 17:38:22
  14. Jahns, Y.: 20 years SWD : German subject authority data prepared for the future (2011) 0.01
    0.010155299 = product of:
      0.040621195 = sum of:
        0.020951848 = weight(_text_:web in 1802) [ClassicSimilarity], result of:
          0.020951848 = score(doc=1802,freq=2.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.21634221 = fieldWeight in 1802, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1802)
        0.019669347 = weight(_text_:data in 1802) [ClassicSimilarity], result of:
          0.019669347 = score(doc=1802,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.2096163 = fieldWeight in 1802, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=1802)
      0.25 = coord(2/8)
    
    Abstract
    The German subject headings authority file (SWD) provides a terminologically controlled vocabulary covering all fields of knowledge. The subject headings are determined by the German Rules for the Subject Catalogue. The authority file is produced and updated daily by participating libraries from around Germany, Austria and Switzerland. Over the last twenty years it has grown into an online-accessible database with about 550,000 headings. The headings are linked to other thesauri, to French and English equivalents, and to notations of the Dewey Decimal Classification. This allows multilingual access and searching across dispersed, heterogeneously indexed catalogues. The vocabulary is used not only for cataloguing library materials but also for web resources and objects in archives and museums.
  15. Wang, S.; Koopman, R.: Second life for authority records (2015) 0.01
    0.009855337 = product of:
      0.03942135 = sum of:
        0.01854444 = weight(_text_:data in 2303) [ClassicSimilarity], result of:
          0.01854444 = score(doc=2303,freq=4.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.19762816 = fieldWeight in 2303, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=2303)
        0.02087691 = product of:
          0.04175382 = sum of:
            0.04175382 = weight(_text_:mining in 2303) [ClassicSimilarity], result of:
              0.04175382 = score(doc=2303,freq=2.0), product of:
                0.16744171 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.029675366 = queryNorm
                0.24936332 = fieldWeight in 2303, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2303)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Authority control is a standard practice in the library community that provides consistent, unique, and unambiguous reference to entities such as persons, places, concepts, etc. The ideal way of referring to authority records through unique identifiers is in line with current linked data principles. When presenting a bibliographic record, the linked authority records are expanded with the authoritative information. This way, any update in the authority records will not affect the indexing of the bibliographic records. The structural information in the authority files can also be leveraged to expand the user's query to retrieve bibliographic records associated with all the variations, narrower terms or related terms. However, in many digital libraries, especially large-scale aggregations such as WorldCat and Europeana, name strings are often used instead of authority record identifiers. This is also partly due to the lack of global authority records that are valid across countries and cultural heritage domains. But even when there are global authority systems, they are not applied at scale. For example, in WorldCat, only 15% of the records have DDC and 3% have UDC codes; less than 40% of the records have one or more topical terms catalogued in the 650 MARC field, many of which are too general (such as "sports" or "literature") to be useful for retrieving bibliographic records. Therefore, when a user query is based on a Dewey code, the results usually have high precision but the recall is much lower than it should be; and a search on a general topical term returns millions of hits without even being complete. All these practices make it difficult to leverage the key benefits of authority files. This is also true for authority files that have been transformed into linked data and enriched with mapping information. There are practical reasons for using name strings instead of identifiers. One is indexing and query response performance. The future infrastructure design should take performance into account while embracing the benefit of linking instead of copying, without introducing extra complexity to users. Notwithstanding all the restrictions, we argue that large-scale aggregations also bring new opportunities for better exploiting the benefits of authority records. It is possible to use machine learning techniques to automatically link bibliographic records to authority records based on the manual input of cataloguers. Text mining and visualization techniques can offer a contextual view of authority records, which in turn can be used to retrieve missing or mis-catalogued records. In this talk, we will describe such opportunities in more detail.
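    As a toy illustration of the linking problem described above (not the authors' method), the sketch below matches a name string from a bibliographic record against a small set of hypothetical authority headings by normalizing both sides and ranking candidates by string similarity:
      # Toy sketch: link a catalogued name string to the most similar authority heading.
      from difflib import SequenceMatcher

      def normalize(name: str) -> str:
          # lowercase and keep only letters and digits, separated by single spaces
          cleaned = "".join(c if c.isalnum() else " " for c in name.lower())
          return " ".join(cleaned.split())

      # hypothetical authority headings and identifiers, not real records
      authorities = {
          "Mann, Thomas, 1875-1955": "auth:0001",
          "Mann, Heinrich, 1871-1950": "auth:0002",
      }

      def best_match(name_string: str):
          target = normalize(name_string)
          scored = [
              (SequenceMatcher(None, target, normalize(heading)).ratio(), heading, ident)
              for heading, ident in authorities.items()
          ]
          return max(scored)   # (similarity, matched heading, authority identifier)

      print(best_match("Thomas Mann"))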
  16. Hickey, T.B.; Toves, J.; O'Neill, E.T.: NACO normalization : a detailed examination of the authority file comparison rules (2006) 0.01
    0.009628983 = product of:
      0.038515933 = sum of:
        0.024443826 = weight(_text_:web in 5760) [ClassicSimilarity], result of:
          0.024443826 = score(doc=5760,freq=2.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.25239927 = fieldWeight in 5760, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5760)
        0.014072108 = product of:
          0.028144216 = sum of:
            0.028144216 = weight(_text_:22 in 5760) [ClassicSimilarity], result of:
              0.028144216 = score(doc=5760,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.2708308 = fieldWeight in 5760, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5760)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Normalization rules are essential for interoperability between bibliographic systems. In the process of working with Name Authority Cooperative Program (NACO) authority files to match records with Functional Requirements for Bibliographic Records (FRBR) and developing the Faceted Application of Subject Terminology (FAST) subject heading schema, the authors found inconsistencies in independently created NACO normalization implementations. Investigating these, the authors found ambiguities in the NACO standard that need resolution, and came to conclusions on how the procedure could be simplified with little impact on matching headings. To encourage others to test their software for compliance with the current rules, the authors have established a Web site that has test files and interactive services showing their current implementation.
    Date
    10. 9.2000 17:38:22
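    A simplified, illustrative normalization in the spirit of the comparison rules examined above (not the authoritative NACO specification): fold case, strip diacritics, keep the first comma, map remaining punctuation to blanks and collapse whitespace:
      # Simplified sketch of NACO-style heading normalization (not the full rule set).
      import unicodedata

      def naco_like_normalize(heading: str) -> str:
          decomposed = unicodedata.normalize("NFD", heading)
          no_marks = "".join(c for c in decomposed if not unicodedata.combining(c))
          out, seen_comma = [], False
          for c in no_marks.upper():
              if c == "," and not seen_comma:
                  out.append(",")        # the first comma (surname delimiter) is kept
                  seen_comma = True
              elif c.isalnum():
                  out.append(c)
              else:
                  out.append(" ")        # all other punctuation becomes a blank
          return " ".join("".join(out).split())

      print(naco_like_normalize("Müller-Lüdenscheidt, Jürgen (1950- )"))
      # -> "MULLER LUDENSCHEIDT, JURGEN 1950"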
  17. Niesner, S.: Die Nutzung bibliothekarischer Normdaten im Web am Beispiel von VIAF und Wikipedia (2015) 0.01
    0.0074075973 = product of:
      0.059260778 = sum of:
        0.059260778 = weight(_text_:web in 1763) [ClassicSimilarity], result of:
          0.059260778 = score(doc=1763,freq=4.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.6119082 = fieldWeight in 1763, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=1763)
      0.125 = coord(1/8)
    
    Abstract
    Library authority data for persons can be put to good use on the Web.
  18. Kimura, M.: A comparison of recorded authority data elements and the RDA Framework in Chinese character cultures (2015) 0.01
    0.00702623 = product of:
      0.05620984 = sum of:
        0.05620984 = weight(_text_:data in 2619) [ClassicSimilarity], result of:
          0.05620984 = score(doc=2619,freq=12.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.59902847 = fieldWeight in 2619, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2619)
      0.125 = coord(1/8)
    
    Abstract
    To investigate which authority data elements are recorded by libraries in the Chinese character cultural sphere (e.g., Japan, Mainland China, Hong Kong, Taiwan, South Korea, and Vietnam), the data elements recorded by each library were examined and compared with the authority data elements defined in the Resource Description and Access (RDA) standard. Recommendations were then made to libraries within this cultural sphere to improve and internationally standardize their authority data. In addition, suggestions are provided for modifying RDA in an effort to increase its compatibility with authority data in the Chinese character cultural sphere.
  19. WebGND 0.01
    0.00698395 = product of:
      0.0558716 = sum of:
        0.0558716 = weight(_text_:web in 3877) [ClassicSimilarity], result of:
          0.0558716 = score(doc=3877,freq=2.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.5769126 = fieldWeight in 3877, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.125 = fieldNorm(doc=3877)
      0.125 = coord(1/8)
    
    Abstract
    A freely available web database for the entries of the GND.
  20. Provost, A. Le; Nicolas, .: IdRef, Paprika and Qualinka : a toolbox for authority data quality and interoperability (2020) 0.01
    0.0064140414 = product of:
      0.05131233 = sum of:
        0.05131233 = weight(_text_:data in 1076) [ClassicSimilarity], result of:
          0.05131233 = score(doc=1076,freq=10.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.5468357 = fieldWeight in 1076, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1076)
      0.125 = coord(1/8)
    
    Abstract
    Authority data has always been at the core of library catalogues. Today, authority data is reference data on a wider scale. The former authorities of the "Sudoc" union catalogue mutated into "IdRef", a read/write platform of open data and services which seeks to become a national supplier of reliable identifiers for French universities. To support their dissemination and comply with high quality standards, Paprika and Qualinka have been added to our toolbox, to expedite the massive and secure linking of scientific publications to IdRef authorities.

Languages

  • e 60
  • d 22
  • a 1

Types

  • a 72
  • el 16
  • m 3
  • b 2
  • r 1
  • s 1