Search (55 results, page 2 of 3)

  • theme_ss:"Normdateien"
  1. Buizza, P.: Bibliographic control and authority control from Paris principles to the present (2004) 0.01
    0.013004904 = product of:
      0.026009807 = sum of:
        0.026009807 = product of:
          0.052019615 = sum of:
            0.052019615 = weight(_text_:web in 5667) [ClassicSimilarity], result of:
              0.052019615 = score(doc=5667,freq=4.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.3059541 = fieldWeight in 5667, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5667)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Forty years ago the ICCP in Paris laid the foundations of international co-operation in descriptive cataloging without explicitly speaking of authority control. Among the factors in the evolution of authority control are the development of catalogs (from the card catalog to local automation, to today's OPAC on the Web) and of the services provided by libraries (from individual service to local users, to system networks, to the World Wide Web), as well as international agreements on cataloging (from the Paris Principles to the UBC programme, to the report on Mandatory data elements for internationally shared resource authority records). This evolution progressed from the principle of the uniform heading to the definition of authority entries and records, and from national bibliographic agencies' responsibility for the internationally shared forms of their own authors' names to the concept of the authorized equivalent heading. Open issues at present include the persisting differences among national cataloging rules and the challenge of respecting local culture and language while remaining internationally readable.
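    Note
    The indented breakdown printed under each result is Lucene's "explain" output for its classic TF-IDF scoring (ClassicSimilarity). As a rough check, the sketch below reproduces the 0.013004904 reported for this result from nothing but the numbers shown in that breakdown; the variable names are ours, not Lucene's.

      import math

      # Figures copied from the explain block above (term "web", doc 5667).
      idf = 3.2635105          # idf(docFreq=4597, maxDocs=44218)
      query_norm = 0.052098576
      field_norm = 0.046875
      freq = 4.0               # occurrences of the term in the matched field

      query_weight = idf * query_norm           # ~0.17002425
      tf = math.sqrt(freq)                      # ClassicSimilarity: tf = sqrt(freq)
      field_weight = tf * idf * field_norm      # ~0.3059541
      term_score = query_weight * field_weight  # ~0.052019615

      # The two coord(1/2) lines each halve the score before it is reported.
      final_score = term_score * 0.5 * 0.5
      print(final_score)                        # ~0.013004904, as shown above

    The other results follow the same arithmetic, only with their own freq, idf and fieldNorm values.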
  2. Altenhöner, R.; Hannemann, J.; Kett, J.: Linked Data aus und für Bibliotheken : Rückgratstärkung im Semantic Web (2010) 0.01
    0.013004904 = product of:
      0.026009807 = sum of:
        0.026009807 = product of:
          0.052019615 = sum of:
            0.052019615 = weight(_text_:web in 4264) [ClassicSimilarity], result of:
              0.052019615 = score(doc=4264,freq=4.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.3059541 = fieldWeight in 4264, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4264)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Semantic web & linked data: Elemente zukünftiger Informationsinfrastrukturen ; 1. DGI-Konferenz ; 62. Jahrestagung der DGI ; Frankfurt am Main, 7. - 9. Oktober 2010 ; Proceedings / Deutsche Gesellschaft für Informationswissenschaft und Informationspraxis. Hrsg.: M. Ockenfeld
  3. Horn, M.E.: "Garbage" in, "refuse and refuse disposal" out : making the most of the subject authority file in the OPAC (2002) 0.01
    0.0123526165 = product of:
      0.024705233 = sum of:
        0.024705233 = product of:
          0.049410466 = sum of:
            0.049410466 = weight(_text_:22 in 156) [ClassicSimilarity], result of:
              0.049410466 = score(doc=156,freq=2.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.2708308 = fieldWeight in 156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=156)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    10. 9.2000 17:38:22
  4. Vellucci, S.L.: Metadata and authority control (2000) 0.01
    0.0123526165 = product of:
      0.024705233 = sum of:
        0.024705233 = product of:
          0.049410466 = sum of:
            0.049410466 = weight(_text_:22 in 180) [ClassicSimilarity], result of:
              0.049410466 = score(doc=180,freq=2.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.2708308 = fieldWeight in 180, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=180)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    10. 9.2000 17:38:22
  5. Franci, L.; Lucarelli, A.; Motta, M.; Rolle, M.: The Nuovo Soggettario Thesaurus : structural features and Web application projects (2011) 0.01
    0.012261141 = product of:
      0.024522282 = sum of:
        0.024522282 = product of:
          0.049044564 = sum of:
            0.049044564 = weight(_text_:web in 1808) [ClassicSimilarity], result of:
              0.049044564 = score(doc=1808,freq=2.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.2884563 = fieldWeight in 1808, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1808)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  6. Kasprzik, A.; Kett, J.: Vorschläge für eine Weiterentwicklung der Sacherschließung und Schritte zur fortgesetzten strukturellen Aufwertung der GND (2018) 0.01
    0.01083742 = product of:
      0.02167484 = sum of:
        0.02167484 = product of:
          0.04334968 = sum of:
            0.04334968 = weight(_text_:web in 4599) [ClassicSimilarity], result of:
              0.04334968 = score(doc=4599,freq=4.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.25496176 = fieldWeight in 4599, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4599)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Given the continuing flood of publications, the question of how to lower the thresholds for maintaining bibliographic and authority data - for intellectual as well as automated subject indexing - is becoming ever more pressing. The quality of data and of work in subject indexing can be improved a) through flexible visualisation of the Integrated Authority File (GND) and other knowledge organization systems, so that their graph structure becomes intuitively graspable, and b) through an investigative analysis of their current structure and the development of tailored automated methods for detecting and correcting faulty patterns. Within its GND development programme 2017-2021, the German National Library (DNB) is examining which conditions must be met for a fruitful community-driven open-source development of such tools. Further potential lies in a long-term transition to representing bibliographic and authority data in Semantic Web description languages (RDF, OWL, SKOS). The GND would then benefit from interoperability with other controlled vocabularies and from easier interaction with other subject communities, and could in turn become an even more attractive knowledge organization system beyond the library sector. In addition, Semantic Web approaches make it possible to develop more strongly formalised, structuring satellite vocabularies around the GND. Not least, this opens up new perspectives for automated subject indexing. It would be worthwhile to explore further how and to what extent semantic-logical methods can enrich the existing mix of methods.
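    Note
    The abstract above names RDF, OWL and SKOS as candidate description languages for bibliographic and authority data. Purely as an illustration of what that could look like - not the DNB's actual data model - the following minimal sketch (Python, rdflib) describes one GND entity as a SKOS concept, reusing the GND number 11855042X cited for Hermann Hesse in the next result; the cross-link target is a placeholder.

      from rdflib import Graph, Literal, Namespace, URIRef
      from rdflib.namespace import RDF, SKOS

      GND = Namespace("https://d-nb.info/gnd/")  # GND URI pattern: https://d-nb.info/gnd/<id>

      g = Graph()
      g.bind("skos", SKOS)

      entity = GND["11855042X"]
      g.add((entity, RDF.type, SKOS.Concept))
      g.add((entity, SKOS.prefLabel, Literal("Hesse, Hermann", lang="de")))
      g.add((entity, SKOS.altLabel, Literal("Hermann Hesse", lang="de")))
      # Links of this kind are what make the GND interoperable with other
      # controlled vocabularies; the target below is only a placeholder.
      g.add((entity, SKOS.exactMatch, URIRef("http://example.org/authority/0001")))

      print(g.serialize(format="turtle"))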
  7. Steeg, F.; Pohl, A.: Ein Protokoll für den Datenabgleich im Web am Beispiel von OpenRefine und der Gemeinsamen Normdatei (GND) (2021) 0.01
    0.01083742 = product of:
      0.02167484 = sum of:
        0.02167484 = product of:
          0.04334968 = sum of:
            0.04334968 = weight(_text_:web in 367) [ClassicSimilarity], result of:
              0.04334968 = score(doc=367,freq=4.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.25496176 = fieldWeight in 367, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=367)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Authority data play an important role particularly with regard to the quality of subject indexing for bibliographic and archival resources. One concrete goal of subject indexing is, for example, that all works about Hermann Hesse can be found consistently. Authority data offer a solution here: during indexing, for instance, the GND number 11855042X is used uniformly for Hermann Hesse. The result is higher-quality subject indexing, above all in terms of consistency and unambiguity, and consequently better findability. When such entities are linked to one another, e.g. Hermann Hesse to one of his works, a knowledge graph emerges, of the kind Google uses for indexing the content of the Web (Singhal 2012). The development of the Google Knowledge Graph and the protocol presented here are historically connected: OpenRefine was originally developed as Google Refine, and its functionality for matching against external data sources (reconciliation) was originally built to integrate Freebase, one of the data sources of the Google Knowledge Graph. Freebase was later merged into Wikidata. Google Refine was already being used for matching against authority data, for example the Library of Congress Subject Headings (Hooland et al. 2013).
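    Note
    The reconciliation protocol discussed above batches named queries into a single request and returns ranked candidate matches. A minimal sketch of such a round trip, assuming the lobid-gnd reconciliation endpoint at https://lobid.org/gnd/reconcile and the usual response layout of the OpenRefine reconciliation API; both should be checked against the service's current documentation.

      import json
      import requests  # third-party: pip install requests

      ENDPOINT = "https://lobid.org/gnd/reconcile"

      # A batch of named queries is sent as a single form field called "queries".
      queries = {"q0": {"query": "Hermann Hesse", "limit": 3}}
      resp = requests.post(ENDPOINT, data={"queries": json.dumps(queries)}, timeout=30)
      resp.raise_for_status()

      for candidate in resp.json()["q0"]["result"]:
          # Each candidate carries an identifier, a label, a score and a flag
          # saying whether the service considers it a safe automatic match.
          print(candidate["id"], candidate["name"], candidate["score"], candidate["match"])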
  8. Tillett, B.B.: Authority control : state of the art and new perspectives (2004) 0.01
    0.010728499 = product of:
      0.021456998 = sum of:
        0.021456998 = product of:
          0.042913996 = sum of:
            0.042913996 = weight(_text_:web in 5655) [ClassicSimilarity], result of:
              0.042913996 = score(doc=5655,freq=2.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.25239927 = fieldWeight in 5655, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5655)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Authority control is necessary for meeting the catalog's objectives of enabling users to find the works of an author and to collocate all works of a person or corporate body. This article looks at the current state of authority control as compared to the visions of the 1979 LITA (Library Information and Technology Association) Institutes and the 1984 Authority Control Interest Group. It explores a new view of IFLA's Universal Bibliographic Control (UBC) and a future vision of a virtual international authority file as a building block for the Semantic Web and reinforces the importance of authority control to improve the precision of searches of large databases or the Internet.
  9. Scheven, E.: ¬Die neue Thesaurusnorm ISO 25964 und die GND (2017) 0.01
    0.010728499 = product of:
      0.021456998 = sum of:
        0.021456998 = product of:
          0.042913996 = sum of:
            0.042913996 = weight(_text_:web in 3505) [ClassicSimilarity], result of:
              0.042913996 = score(doc=3505,freq=2.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.25239927 = fieldWeight in 3505, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3505)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Theorie, Semantik und Organisation von Wissen: Proceedings der 13. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und dem 13. Internationalen Symposium der Informationswissenschaft der Higher Education Association for Information Science (HI) Potsdam (19.-20.03.2013): 'Theory, Information and Organization of Knowledge' / Proceedings der 14. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und Natural Language & Information Systems (NLDB) Passau (16.06.2015): 'Lexical Resources for Knowledge Organization' / Proceedings des Workshops der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) auf der SEMANTICS Leipzig (1.09.2014): 'Knowledge Organization and Semantic Web' / Proceedings des Workshops der Polnischen und Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) Cottbus (29.-30.09.2011): 'Economics of Knowledge Production and Organization'. Hrsg. von W. Babik, H.P. Ohly u. K. Weber
  10. Zhu, L.; Xu, A.; Deng, S.; Heng, G.; Li, X.: Entity management using Wikidata for cultural heritage information (2024) 0.01
    0.010728499 = product of:
      0.021456998 = sum of:
        0.021456998 = product of:
          0.042913996 = sum of:
            0.042913996 = weight(_text_:web in 975) [ClassicSimilarity], result of:
              0.042913996 = score(doc=975,freq=2.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.25239927 = fieldWeight in 975, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=975)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Entity management in a Linked Open Data (LOD) environment is the process of associating a unique, persistent, and dereferenceable Uniform Resource Identifier (URI) with a single entity. It allows data from various sources to be reused and connected to the Web, and it can help improve data quality and enable more efficient workflows. This article describes a semi-automated entity management project conducted by the "Wikidata: WikiProject Chinese Culture and Heritage Group," explores the challenges and opportunities in describing Chinese women poets and historical places in Wikidata, the world's largest crowdsourced LOD platform, and discusses lessons learned and future opportunities.
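    Note
    "Dereferenceable URI" above means that requesting the identifier returns machine-readable data about the entity. A minimal sketch, assuming Wikidata's Special:EntityData endpoint and using Q42 only as a stand-in item ID; a project like the one described would plug in the QIDs of the poets and places it manages.

      import requests  # third-party: pip install requests

      qid = "Q42"  # stand-in QID, not one of the project's entities
      url = f"https://www.wikidata.org/wiki/Special:EntityData/{qid}.json"
      entity = requests.get(url, timeout=30).json()["entities"][qid]

      # Labels and claims are what downstream reusers typically harvest.
      print(entity["labels"]["en"]["value"])
      print(len(entity.get("claims", {})), "statements")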
  11. Hubrich, J.: Input und Output der Schlagwortnormdatei (SWD) : Aufwand zur Sicherstellung der Qualität und Möglichkeiten des Nutzens im OPAC (2005) 0.01
    0.010587957 = product of:
      0.021175914 = sum of:
        0.021175914 = product of:
          0.042351827 = sum of:
            0.042351827 = weight(_text_:22 in 4183) [ClassicSimilarity], result of:
              0.042351827 = score(doc=4183,freq=2.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.23214069 = fieldWeight in 4183, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4183)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    30. 1.2007 18:22:15
  12. Zedlitz, J.: Biographische Normdaten : ein Überblick (2017) 0.01
    0.010587957 = product of:
      0.021175914 = sum of:
        0.021175914 = product of:
          0.042351827 = sum of:
            0.042351827 = weight(_text_:22 in 3502) [ClassicSimilarity], result of:
              0.042351827 = score(doc=3502,freq=2.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.23214069 = fieldWeight in 3502, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3502)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Archivar. 70(2017) H.1, S.22-25
  13. Scheven, E.: Qualitätssicherung in der GND (2021) 0.01
    0.010587957 = product of:
      0.021175914 = sum of:
        0.021175914 = product of:
          0.042351827 = sum of:
            0.042351827 = weight(_text_:22 in 314) [ClassicSimilarity], result of:
              0.042351827 = score(doc=314,freq=2.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.23214069 = fieldWeight in 314, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=314)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    23. 9.2021 19:12:22
  14. Dean, R.J.: FAST: development of simplified headings for metadata (2004) 0.01
    0.009195855 = product of:
      0.01839171 = sum of:
        0.01839171 = product of:
          0.03678342 = sum of:
            0.03678342 = weight(_text_:web in 5682) [ClassicSimilarity], result of:
              0.03678342 = score(doc=5682,freq=2.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.21634221 = fieldWeight in 5682, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5682)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Library of Congress Subject Headings schema (LCSH) is the most commonly used and widely accepted subject vocabulary for general application. It is the de facto universal controlled vocabulary and has been a model for developing subject heading systems by many countries. However, LCSH's complex syntax and rules for constructing headings restrict its application by requiring highly skilled personnel and limit the effectiveness of automated authority control. Recent trends, driven to a large extent by the rapid growth of the Web, are forcing changes in bibliographic control systems to make them easier to use, understand, and apply, and subject headings are no exception. The purpose of adapting the LCSH with a simplified syntax to create FAST (Faceted Application of Subject Terminology) headings is to retain the very rich vocabulary of LCSH while making the schema easier to understand, control, apply, and use. The schema maintains compatibility with LCSH--any valid Library of Congress subject heading can be converted to FAST headings.
  15. Virtuelle Normdatei (2008) 0.01
    0.009195855 = product of:
      0.01839171 = sum of:
        0.01839171 = product of:
          0.03678342 = sum of:
            0.03678342 = weight(_text_:web in 2673) [ClassicSimilarity], result of:
              0.03678342 = score(doc=2673,freq=2.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.21634221 = fieldWeight in 2673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2673)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    "Die Deutsche Nationalbibliothek, die Bibliothèque nationale de France, die Library of Congress und das Online Computer Library Center (OCLC) sind übereingekommen, gemeinsam den »Virtual International Authority File« (VIAF), eine Virtuelle Internationale Normdatei, aufzubauen und fortzuentwickeln. Die einzelnen Normdateien sollen im VIAF virtuell zu einem gemeinsamen Normdaten-Service integriert werden, der den Zugang zu den Namen aller einbezogenen Normdateien bietet. Die Vereinbarung baut auf einem vorausgegangenen Forschungsprojekt auf, in dem die Deutsche Nationalbibliothek gemeinsam mit der Library of Congress und OCLC durch die Zusammenführung ihrer Personennamendateien nachgewiesen haben, dass der Aufbau eines Virtual International Authority File auch unter den Bedingungen großer Datenbestände machbar ist. Mit der neuen Kooperationsvereinbarung stößt die Bibliothèque nationale de France hinzu, und der VIAF wird um die französischen Normdaten erweitert. Langfristig zielt das VIAF-Projekt darauf ab, die Normdateien möglichst vieler Bibliotheken zu einem globalen VIAF-Service zu integrieren, der für die Nutzer im Web weltweit frei zugänglich ist."
  16. O'Neill, E.T.; Bennett, R.; Kammerer, K.: Using authorities to improve subject searches (2012) 0.01
    0.009195855 = product of:
      0.01839171 = sum of:
        0.01839171 = product of:
          0.03678342 = sum of:
            0.03678342 = weight(_text_:web in 310) [ClassicSimilarity], result of:
              0.03678342 = score(doc=310,freq=2.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.21634221 = fieldWeight in 310, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=310)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Beyond libraries - subject metadata in the digital environment and semantic web. IFLA Satellite Post-Conference, 17-18 August 2012, Tallinn
  17. Rotenberg, E.; Kushmerick, A.: The author challenge : identification of self in the scholarly literature (2011) 0.01
    0.009195855 = product of:
      0.01839171 = sum of:
        0.01839171 = product of:
          0.03678342 = sum of:
            0.03678342 = weight(_text_:web in 1332) [ClassicSimilarity], result of:
              0.03678342 = score(doc=1332,freq=2.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.21634221 = fieldWeight in 1332, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1332)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Considering the expansion of research output across the globe, along with the growing demand for quantitative tracking of research outcomes by government authorities and research institutions, the challenges of author identity are increasing. In recent years, a number of initiatives to help solve the author "name game" have been launched from all areas of the scholarly information market space. This article introduces the various author identification tools and services Thomson Reuters provides, including Distinct Author Sets and ResearcherID-which reflect a combination of automated clustering and author participation-as well as the use of other data types, such as grants and patents, to expand the universe of author identification. Industry-wide initiatives such as the Open Researcher and Contributor ID (ORCID) are also described. Future author-related developments in ResearcherID and Thomson Reuters Web of Knowledge are also included.
  18. Jahns, Y.: 20 years SWD : German subject authority data prepared for the future (2011) 0.01
    0.009195855 = product of:
      0.01839171 = sum of:
        0.01839171 = product of:
          0.03678342 = sum of:
            0.03678342 = weight(_text_:web in 1802) [ClassicSimilarity], result of:
              0.03678342 = score(doc=1802,freq=2.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.21634221 = fieldWeight in 1802, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1802)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The German subject headings authority file (SWD) provides a terminologically controlled vocabulary covering all fields of knowledge. The subject headings are determined by the German Rules for the Subject Catalogue. The authority file is produced and updated daily by participating libraries from across Germany, Austria and Switzerland. Over the last twenty years it has grown into an online-accessible database of about 550,000 headings. The headings are linked to other thesauri, to French and English equivalents, and to notations of the Dewey Decimal Classification, which allows multilingual access and searching across dispersed, heterogeneously indexed catalogues. The vocabulary is used not only for cataloguing library materials but also for web resources and for objects in archives and museums.
  19. Vukadin, A.: Development of a classification-oriented authority control : the experience of the National and University Library in Zagreb (2015) 0.01
    0.009195855 = product of:
      0.01839171 = sum of:
        0.01839171 = product of:
          0.03678342 = sum of:
            0.03678342 = weight(_text_:web in 2296) [ClassicSimilarity], result of:
              0.03678342 = score(doc=2296,freq=2.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.21634221 = fieldWeight in 2296, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2296)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The paper presents experiences and challenges encountered during the planning and creation of the Universal Decimal Classification (UDC) authority database in the National and University Library in Zagreb, Croatia. The project started in 2014 with the objective of facilitating classification data management, improving the indexing consistency at the institutional level and the machine readability of data for eventual sharing and re-use in the Web environment. The paper discusses the advantages and disadvantages of UDC, which is an analytico-synthetic classification scheme tending towards a more faceted structure, in regard to various aspects of authority control. This discussion represents the referential framework for the project. It determines the choice of elements to be included in the authority file, e.g. distinguishing between syntagmatic and paradigmatic combinations of subjects. It also determines the future lines of development, e.g. interlinking with the subject headings authority file in order to provide searching by verbal expressions.
  20. Byrum, J.D.: The emerging global bibliographical network : the era of international standardization in the development of cataloging policy (2000) 0.01
    0.008823298 = product of:
      0.017646596 = sum of:
        0.017646596 = product of:
          0.03529319 = sum of:
            0.03529319 = weight(_text_:22 in 190) [ClassicSimilarity], result of:
              0.03529319 = score(doc=190,freq=2.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.19345059 = fieldWeight in 190, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=190)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    10. 9.2000 17:38:22

Languages

  • e 32
  • d 20
  • a 1

Types

  • a 45
  • el 11
  • b 2
  • m 2
  • r 1
  • s 1