Search (17 results, page 1 of 1)

  • × theme_ss:"Normdateien"
  • × type_ss:"a"
  • × year_i:[2010 TO 2020}
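The three facets above correspond to filter queries on the underlying Lucene/Solr index (the year filter uses an inclusive lower and exclusive upper bound). A minimal sketch of how the same result set could be requested programmatically; the Solr host, core name, and field handling are assumptions, only the three filter values are taken from the facet list.

```python
import requests  # third-party HTTP client

# Hypothetical Solr endpoint; host and core name are assumptions.
SOLR_URL = "http://localhost:8983/solr/records/select"

params = {
    "q": "*:*",
    # Filter queries matching the facets shown on this results page:
    "fq": [
        'theme_ss:"Normdateien"',   # theme facet
        'type_ss:"a"',              # record type facet
        "year_i:[2010 TO 2020}",    # 2010 inclusive, 2020 exclusive
    ],
    "rows": 20,
    "wt": "json",
}

response = requests.get(SOLR_URL, params=params)
docs = response.json()["response"]["docs"]
print(len(docs), "records matched")
```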
  1. O'Neill, E.T.; Bennett, R.; Kammerer, K.: Using authorities to improve subject searches (2012) 0.04
    0.037249055 = product of:
      0.07449811 = sum of:
        0.03490599 = weight(_text_:web in 310) [ClassicSimilarity], result of:
          0.03490599 = score(doc=310,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.21634221 = fieldWeight in 310, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=310)
        0.03959212 = weight(_text_:search in 310) [ClassicSimilarity], result of:
          0.03959212 = score(doc=310,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.230407 = fieldWeight in 310, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=310)
      0.5 = coord(2/4)
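    The indented breakdown above is Lucene's "explain" output for ClassicSimilarity (TF-IDF): each term node multiplies queryWeight (idf x queryNorm) by fieldWeight (sqrt(freq) x idf x fieldNorm), and the coord factor scales the sum by the fraction of query clauses that matched. A minimal sketch that recomputes the 0.037249 score for document 310 from the factors shown; the helper name `term_score` is illustrative, the constants are copied from the breakdown.

```python
import math

def term_score(freq, idf, query_norm, field_norm):
    """One weight(_text_:term) node: queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                  # 1.4142135 for freq=2.0
    query_weight = idf * query_norm       # e.g. 3.2635105 * 0.049439456
    field_weight = tf * idf * field_norm  # e.g. 1.4142135 * 3.2635105 * 0.046875
    return query_weight * field_weight

QUERY_NORM, FIELD_NORM = 0.049439456, 0.046875

web    = term_score(2.0, 3.2635105, QUERY_NORM, FIELD_NORM)  # ~0.03490599
search = term_score(2.0, 3.475677,  QUERY_NORM, FIELD_NORM)  # ~0.03959212

coord = 2 / 4  # 2 of 4 query clauses matched in doc 310
print((web + search) * coord)  # ~0.037249055, the top line of the breakdown
```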
    
    Abstract
    Authority files have played an important role in improving the quality of indexing and subject cataloging. Although authorities can significantly improve search by increasing the number of access points, they are rarely an integral part of the information retrieval process, particularly end-users' searches. A retrieval prototype, searchFAST, was developed to test the feasibility of using an authority file as an index to bibliographic records. searchFAST uses FAST (Faceted Application of Subject Terminology) as an index to OCLC's WorldCat.org bibliographic database. The searchFAST methodology complements, rather than replaces, existing WorldCat.org access. The bibliographic file is searched indirectly; the authority file is searched first to identify appropriate subject headings, and those headings are then used to retrieve the matching bibliographic records. The prototype demonstrates the effectiveness and practicality of using an authority file as an index. Searching the authority file leverages authority control work by increasing the number of access points while supporting a simple interface designed for end-users.
    Source
    Beyond libraries - subject metadata in the digital environment and semantic web. IFLA Satellite Post-Conference, 17-18 August 2012, Tallinn
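    The abstract above describes a two-step retrieval: the authority file is searched first to pick suitable subject headings, and only those headings are then used to fetch bibliographic records. A minimal sketch of that flow under simplifying assumptions; the index shapes and sample data are invented stand-ins for FAST and WorldCat.org, not the actual searchFAST prototype.

```python
def two_step_subject_search(user_query, authority_index, bib_index, max_headings=5):
    """Authority file first, then bibliographic file (searchFAST-style flow).

    authority_index: heading -> authority record (with variant forms)
    bib_index:       heading -> list of bibliographic record ids
    Both indexes are hypothetical stand-ins.
    """
    q = user_query.lower()
    # Step 1: search the authority file for candidate subject headings.
    headings = [
        heading for heading, record in authority_index.items()
        if q in heading.lower()
        or any(q in variant.lower() for variant in record.get("variants", []))
    ][:max_headings]

    # Step 2: use the selected headings to retrieve matching bibliographic records.
    results = []
    for heading in headings:
        results.extend(bib_index.get(heading, []))
    return headings, results


authority_index = {
    "Authority files (Information retrieval)": {"variants": ["Authority control"]},
}
bib_index = {"Authority files (Information retrieval)": ["ocn123", "ocn456"]}
print(two_step_subject_search("authority control", authority_index, bib_index))
```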
  2. Niesner, S.: ¬Die Nutzung bibliothekarischer Normdaten im Web am Beispiel von VIAF und Wikipedia (2015) 0.02
    0.024682263 = product of:
      0.09872905 = sum of:
        0.09872905 = weight(_text_:web in 1763) [ClassicSimilarity], result of:
          0.09872905 = score(doc=1763,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.6119082 = fieldWeight in 1763, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=1763)
      0.25 = coord(1/4)
    
    Abstract
    Library authority data for persons can be put to meaningful use on the Web.
  3. Ilik, V.: Cataloger makeover : creating non-MARC name authorities (2015) 0.01
    0.014397987 = product of:
      0.05759195 = sum of:
        0.05759195 = weight(_text_:web in 1884) [ClassicSimilarity], result of:
          0.05759195 = score(doc=1884,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.35694647 = fieldWeight in 1884, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1884)
      0.25 = coord(1/4)
    
    Abstract
    This article shares a vision of the enterprise of cataloging and the role of catalogers and metadata librarians in the twenty-first century. The revolutionary opportunities now presented by Semantic Web technologies liberate catalogers from their historically analog-based static world, re-conceptualize it, and transform it into a world of high dimensionality and fluidity. By presenting illustrative examples of innovative metadata creation and manipulation, such as non-MARC name authority records, we seek to contribute to the libraries' mission with innovative projects that enable discovery, development, communication, learning, and creativity, and hold promise to exceed users' expectations.
    Theme
    Semantic Web
  4. Tillett, B.B.: Complementarity of perspectives for resource descriptions (2015) 0.01
    0.012595614 = product of:
      0.050382458 = sum of:
        0.050382458 = weight(_text_:web in 2288) [ClassicSimilarity], result of:
          0.050382458 = score(doc=2288,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.3122631 = fieldWeight in 2288, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2288)
      0.25 = coord(1/4)
    
    Abstract
    Bibliographic data is used to describe resources held in the collections of libraries, archives and museums. That data is mostly available on the Web today and mostly as linked data. Also on the Web are the controlled vocabulary systems of name authority files, like the Virtual International Authority File (VIAF), classification systems, and subject terms. These systems offer their own linked data to potentially help users find the information they want - whether at their local library or anywhere in the world that is willing to make their resources available. We have found it beneficial to merge authority data for names on a global level, as the entities are relatively clear. That is not true for subject concepts and terminology that have categorisation systems developed according to varying principles and schemes and are in multiple languages. Rather than requiring everyone in the world to use the same categorisation/classification system in the same language, we know that the Web offers us the opportunity to add descriptors assigned around the world using multiple systems from multiple perspectives to identify our resources. Those descriptors add value to refine searches, help users worldwide and share globally what each library does locally.
  5. Altenhöner, R.; Hannemann, J.; Kett, J.: Linked Data aus und für Bibliotheken : Rückgratstärkung im Semantic Web (2010) 0.01
    0.012341131 = product of:
      0.049364526 = sum of:
        0.049364526 = weight(_text_:web in 4264) [ClassicSimilarity], result of:
          0.049364526 = score(doc=4264,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.3059541 = fieldWeight in 4264, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4264)
      0.25 = coord(1/4)
    
    Source
    Semantic web & linked data: Elemente zukünftiger Informationsinfrastrukturen ; 1. DGI-Konferenz ; 62. Jahrestagung der DGI ; Frankfurt am Main, 7. - 9. Oktober 2010 ; Proceedings / Deutsche Gesellschaft für Informationswissenschaft und Informationspraxis. Hrsg.: M. Ockenfeld
  6. Franci, L.; Lucarelli, A.; Motta, M.; Rolle, M.: ¬The Nuovo Soggettario Thesaurus : structural features and Web application projects (2011) 0.01
    0.0116353305 = product of:
      0.046541322 = sum of:
        0.046541322 = weight(_text_:web in 1808) [ClassicSimilarity], result of:
          0.046541322 = score(doc=1808,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.2884563 = fieldWeight in 1808, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=1808)
      0.25 = coord(1/4)
    
  7. Kasprzik, A.; Kett, J.: Vorschläge für eine Weiterentwicklung der Sacherschließung und Schritte zur fortgesetzten strukturellen Aufwertung der GND (2018) 0.01
    0.010284277 = product of:
      0.041137107 = sum of:
        0.041137107 = weight(_text_:web in 4599) [ClassicSimilarity], result of:
          0.041137107 = score(doc=4599,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.25496176 = fieldWeight in 4599, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4599)
      0.25 = coord(1/4)
    
    Abstract
    Given the continuing flood of publications, the question of how the thresholds for maintaining bibliographic and authority data can be lowered is becoming ever more pressing - for intellectual as well as automated subject cataloguing. Data and work quality in subject cataloguing can be improved a) by a flexible visualization of the Gemeinsame Normdatei (GND) and other knowledge organization systems, so that their graph structure becomes intuitively graspable, and b) by an investigative analysis of their current structure and the development of tailored automated methods for detecting and correcting faulty patterns. Within the GND development programme 2017-2021, the Deutsche Nationalbibliothek (DNB) is examining which conditions must be in place for fruitful community-driven open-source development of such tools. Further potential lies in a long-term transition to representing bibliographic and authority data in description languages in the sense of the Semantic Web (RDF, OWL, SKOS). In this way the GND benefits from interoperability with other controlled vocabularies and from easier interaction with other expert communities, and can in turn become an even more attractive knowledge organization system outside the library world as well. In addition, Semantic Web approaches make it possible to develop more strongly formalized, structuring satellite vocabularies around the GND. Not least, this also opens up new perspectives for automated subject cataloguing. It would be worthwhile to explore in more detail how and to what extent semantic-logical methods can enrich the existing mix of methods.
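    The abstract mentions a long-term move towards representing authority data in Semantic Web description languages (RDF, OWL, SKOS). A minimal sketch, assuming the rdflib library and an invented GND identifier, of what a single authority entry might look like as a SKOS concept; it illustrates the idea only and is not the DNB's actual data model.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

# GND URI pattern with an invented identifier (not a real record).
GND = Namespace("https://d-nb.info/gnd/")
concept = GND["0000000-0"]

g = Graph()
g.bind("skos", SKOS)
g.bind("gnd", GND)

g.add((concept, RDF.type, SKOS.Concept))
g.add((concept, SKOS.prefLabel, Literal("Normdatei", lang="de")))
g.add((concept, SKOS.altLabel, Literal("Authority file", lang="en")))
g.add((concept, SKOS.broader, GND["1111111-1"]))  # also invented

print(g.serialize(format="turtle"))
```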
  8. Scheven, E.: ¬Die neue Thesaurusnorm ISO 25964 und die GND (2017) 0.01
    0.010180915 = product of:
      0.04072366 = sum of:
        0.04072366 = weight(_text_:web in 3505) [ClassicSimilarity], result of:
          0.04072366 = score(doc=3505,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.25239927 = fieldWeight in 3505, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3505)
      0.25 = coord(1/4)
    
    Source
    Theorie, Semantik und Organisation von Wissen: Proceedings der 13. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und dem 13. Internationalen Symposium der Informationswissenschaft der Higher Education Association for Information Science (HI) Potsdam (19.-20.03.2013): 'Theory, Information and Organization of Knowledge' / Proceedings der 14. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und Natural Language & Information Systems (NLDB) Passau (16.06.2015): 'Lexical Resources for Knowledge Organization' / Proceedings des Workshops der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) auf der SEMANTICS Leipzig (1.09.2014): 'Knowledge Organization and Semantic Web' / Proceedings des Workshops der Polnischen und Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) Cottbus (29.-30.09.2011): 'Economics of Knowledge Production and Organization'. Hrsg. von W. Babik, H.P. Ohly u. K. Weber
  9. Rotenberg, E.; Kushmerick, A.: ¬The author challenge : identification of self in the scholarly literature (2011) 0.01
    0.008726497 = product of:
      0.03490599 = sum of:
        0.03490599 = weight(_text_:web in 1332) [ClassicSimilarity], result of:
          0.03490599 = score(doc=1332,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.21634221 = fieldWeight in 1332, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1332)
      0.25 = coord(1/4)
    
    Abstract
    Considering the expansion of research output across the globe, along with the growing demand for quantitative tracking of research outcomes by government authorities and research institutions, the challenges of author identity are increasing. In recent years, a number of initiatives to help solve the author "name game" have been launched from all areas of the scholarly information market space. This article introduces the various author identification tools and services Thomson Reuters provides, including Distinct Author Sets and ResearcherID-which reflect a combination of automated clustering and author participation-as well as the use of other data types, such as grants and patents, to expand the universe of author identification. Industry-wide initiatives such as the Open Researcher and Contributor ID (ORCID) are also described. Future author-related developments in ResearcherID and Thomson Reuters Web of Knowledge are also included.
  10. Jahns, Y.: 20 years SWD : German subject authority data prepared for the future (2011) 0.01
    0.008726497 = product of:
      0.03490599 = sum of:
        0.03490599 = weight(_text_:web in 1802) [ClassicSimilarity], result of:
          0.03490599 = score(doc=1802,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.21634221 = fieldWeight in 1802, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1802)
      0.25 = coord(1/4)
    
    Abstract
    The German subject headings authority file - SWD - provides a terminologically controlled vocabulary covering all fields of knowledge. The subject headings are determined by the German Rules for the Subject Catalogue. The authority file is produced and updated daily by participating libraries from Germany, Austria and Switzerland. Over the last twenty years it has grown into an online-accessible database of about 550,000 headings. The headings are linked to other thesauri, to French and English equivalents, and to notations of the Dewey Decimal Classification, allowing multilingual access and searching across dispersed, heterogeneously indexed catalogues. The vocabulary is used not only for cataloguing library materials, but also for web resources and objects in archives and museums.
  11. Vukadin, A.: Development of a classification-oriented authority control : the experience of the National and University Library in Zagreb (2015) 0.01
    0.008726497 = product of:
      0.03490599 = sum of:
        0.03490599 = weight(_text_:web in 2296) [ClassicSimilarity], result of:
          0.03490599 = score(doc=2296,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.21634221 = fieldWeight in 2296, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2296)
      0.25 = coord(1/4)
    
    Abstract
    The paper presents experiences and challenges encountered during the planning and creation of the Universal Decimal Classification (UDC) authority database in the National and University Library in Zagreb, Croatia. The project started in 2014 with the objective of facilitating classification data management, improving the indexing consistency at the institutional level and the machine readability of data for eventual sharing and re-use in the Web environment. The paper discusses the advantages and disadvantages of UDC, which is an analytico-synthetic classification scheme tending towards a more faceted structure, in regard to various aspects of authority control. This discussion represents the referential framework for the project. It determines the choice of elements to be included in the authority file, e.g. distinguishing between syntagmatic and paradigmatic combinations of subjects. It also determines the future lines of development, e.g. interlinking with the subject headings authority file in order to provide searching by verbal expressions.
  12. Francu, V.; Dediu, L.-I.: TinREAD - an integrative solution for subject authority control (2015) 0.01
    0.008248359 = product of:
      0.032993436 = sum of:
        0.032993436 = weight(_text_:search in 2297) [ClassicSimilarity], result of:
          0.032993436 = score(doc=2297,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.19200584 = fieldWeight in 2297, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2297)
      0.25 = coord(1/4)
    
    Abstract
    The paper introduces TinREAD (The Information Navigator for Readers), an integrated library system produced by IME Romania. The feature of main interest is the way TinREAD handles a classification-based thesaurus in which verbal index terms are mapped to classification notations. It supports subject authority control by interlinking the authority files (subject headings and the UDC system). Authority files are used for indexing consistency. Although intellectual indexing is said to be, unlike automated indexing, both subjective and inconsistent, TinREAD uses intellectual indexing as input (the UDC notations assigned to documents) for the automated indexing that results from implementing a thesaurus structure based on UDC. Each UDC notation is represented by a UNIMARC subject heading record as authority data. One classification notation can be used to search simultaneously in more than one corresponding thesaurus. In this way natural-language terms are used in indexing while the link to the corresponding classification notation is preserved. Additionally, the system can manage multilingual data for the authority files. These and other characteristics of TinREAD are discussed and illustrated at length in the paper, together with problems encountered and possible solutions.
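    The abstract describes authority records that map each UDC notation to verbal thesaurus terms in several languages, so that one notation can be searched across more than one thesaurus while the link back to the notation is kept. A minimal sketch of such a mapping with hypothetical lookup functions; the field names and sample data are invented and do not reflect TinREAD's actual UNIMARC structure.

```python
# Hypothetical subject-authority mapping: one UDC notation per record,
# with verbal index terms per language (invented sample data).
udc_authorities = {
    "025.4": {"en": ["Classification", "Indexing languages"],
              "ro": ["Clasificare", "Limbaje de indexare"]},
    "004.8": {"en": ["Artificial intelligence"],
              "ro": ["Inteligenta artificiala"]},
}

def terms_for_notation(notation, languages=("en", "ro")):
    """Return the verbal terms linked to one UDC notation across thesauri."""
    record = udc_authorities.get(notation, {})
    return [term for lang in languages for term in record.get(lang, [])]

def notations_for_term(term):
    """Reverse lookup: a natural-language term back to its classification notation."""
    hits = []
    for notation, labels in udc_authorities.items():
        if any(term.lower() == t.lower() for terms in labels.values() for t in terms):
            hits.append(notation)
    return hits

print(terms_for_notation("025.4"))        # search several thesauri at once
print(notations_for_term("Clasificare"))  # keep the link back to the notation
```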
  13. O'Neill, E.T.; Bennett, R.; Kammerer, K.: Using authorities to improve subject searches (2014) 0.01
    0.0072720814 = product of:
      0.029088326 = sum of:
        0.029088326 = weight(_text_:web in 1970) [ClassicSimilarity], result of:
          0.029088326 = score(doc=1970,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.18028519 = fieldWeight in 1970, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1970)
      0.25 = coord(1/4)
    
    Footnote
    Contribution to a special issue "Beyond libraries: Subject metadata in the digital environment and Semantic Web", which contains papers from the IFLA Satellite Post-Conference of the same name, 17-18 August 2012, Tallinn.
  14. Sandner, M.: Neues aus der Kommission für Sacherschließung : Das neue Tool "NSW online" (2011) 0.01
    0.0071989936 = product of:
      0.028795974 = sum of:
        0.028795974 = weight(_text_:web in 4529) [ClassicSimilarity], result of:
          0.028795974 = score(doc=4529,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.17847323 = fieldWeight in 4529, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4529)
      0.25 = coord(1/4)
    
    Content
    "Die "Liste der fachlichen Nachschlagewerke zu den Normdateien" (NSW-Liste) stellt mit ihren derzeit rund 1.660 Einträgen ein verbindliches Arbeitsinstrument für die tägliche Praxis in der kooperativen Normdaten-pflege des deutschsprachigen Raumes, speziell für die Terminologiearbeit in der bibliothekarischen Sacherschließung dar. In jedem Normdatensatz der Schlagwortnormdatei (SWD) werden für den Nachweis und die Begründung der Ansetzungs- und Verweisungsformen eines Deskriptors im Feld "Quelle" Referenzwerke aus der so genannten Prioritätenliste (Rangfolge der Nachschlagewerke), darüber hinaus aus der gesamten NSW-Liste, festgehalten und normiert abgekürzt. In gedruckter Form erscheint diese von der Deutschen Nationalbibliothek (DNB) regelmäßig aktualisierte Liste jährlich mit einem Änderungsdienst (Änderungen, Neuauflagen; Neuaufnahmen) und steht seit einigen Jahren auch elektronisch abrufbar bereit. Dennoch ist sie "in die Jahre" gekommen. Eine verbesserte Form dieser Liste war ein langjähriges Desiderat für die Neuansetzungspraxis in der SWD. Erst eine Projektarbeit im Rahmen des ULG bot 2008/2009 in Wien die Gelegenheit, solch einem elektronischen Hilfsmittel entscheidend näher zu kommen. Das Projektergebnis war praxistauglich und wurde 2010 von der UB Wien in ein Content Management System (CMS) eingebettet. Zahlreiche Tests und funktionelle Anpassungen sowie eine genaue Durchsicht des Grunddatenbestandes und aller Links in den Katalog des Pilotverbundes (OBV) waren noch nötig, und auch die erste Aktualisierung nach der Druckausgabe 2010 führten wir durch, bevor wir im Herbst 2010 der Fachöffentlichkeit eine Beta-Version vorstellen konnten. Seither steht die Suche im NSW-Tool als Alternative zur Benützung der Druckausgabe allen frei zur Verfügung: http://www.univie.ac.at/nsw/. Sämtliche SWD-Kooperationspartner-Redaktionen und die Lokalen Redaktionen (ÖSWD-LRs) des Österreichischen Bibliothekenverbundes (OBV) können über das Web-Frontend von "NSW online" ihre Wünsche an die Redaktion der NSW-Liste (Fachabteilung SE, DNB) direkt im Tool deponieren (Korrekturanträge sowie Vorschläge zur Aufnahme fehlender oder neuer Nachschlagewerke) und frei im Internet zugängliche Volltexte zu den bereits in der Liste vorhandenen Titeln verlinken (Erstanmeldung über den Webmaster: via Hilfe-Seite im Tool).
    In addition, only the member libraries of the OBV can add their holdings matching the reference works in Aleph, create a new link to their own union catalogue themselves via the web front end of the tool where appropriate, and in particular attach their locally available electronic full texts. In the backend, new records are created, existing entries corrected, editorial comments placed, and correction requests collected; new editions are redirected to their "anchor record" so that display and linking, for example with the ranking list, remain correct. HTML pages such as the help text, the ranking list and the like are also maintained here. Only the webmaster of the UB Wien, the SWD central editorial office of the OBV, and the Fachabteilung SE of the DNB have access to the administration interface. (Not only) subject cataloguers can use the tool with all its advantages and still proceed in their accustomed way when they want to look up and research sources for new headings, because the structure of the tool follows the layout of the print version. It is advisable to add a hypertext link to the "Quelle" field in the cataloguing module of the respective library system for SWD and PND. Authority file work is complex and demanding. Compliance with the ranking order that is binding for all new headings is made decisively easier by the tool and its practice-oriented presentation, which helps to ensure a high quality of every authority record from the outset. The greatest time saving in daily practice comes from immediate access to linked full texts. In view of increasing multilateral data exchange and a simultaneous dramatic shortage of staff resources despite a considerable rise in the volume of literature to be subject indexed, this will probably be the most lasting effect of "NSW online" in the workflow of the recently introduced online editorial process (ONR) for authority files."
  15. Wang, S.; Koopman, R.: Second life for authority records (2015) 0.01
    0.0065986873 = product of:
      0.026394749 = sum of:
        0.026394749 = weight(_text_:search in 2303) [ClassicSimilarity], result of:
          0.026394749 = score(doc=2303,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.15360467 = fieldWeight in 2303, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=2303)
      0.25 = coord(1/4)
    
    Abstract
    Authority control is a standard practice in the library community that provides consistent, unique, and unambiguous reference to entities such as persons, places, concepts, etc. The ideal way of referring to authority records through unique identifiers is in line with the current linked data principle. When presenting a bibliographic record, the linked authority records are expanded with the authoritative information. This way, any update to the authority records will not affect the indexing of the bibliographic records. The structural information in the authority files can also be leveraged to expand the user's query to retrieve bibliographic records associated with all the variations, narrower terms or related terms. However, in many digital libraries, especially large-scale aggregations such as WorldCat and Europeana, name strings are often used instead of authority record identifiers. This is also partly due to the lack of global authority records that are valid across countries and cultural heritage domains. But even when there are global authority systems, they are not applied at scale. For example, in WorldCat, only 15% of the records have DDC and 3% have UDC codes; less than 40% of the records have one or more topical terms catalogued in the 650 MARC field, many of which are too general (such as "sports" or "literature") to be useful for retrieving bibliographic records. Therefore, when a user query is based on a Dewey code, the results usually have high precision but the recall is much lower than it should be; and a search on a general topical term returns millions of hits without even being complete. All these practices make it difficult to leverage the key benefits of authority files. This is also true for authority files that have been transformed into linked data and enriched with mapping information. There are practical reasons for using name strings instead of identifiers. One is indexing and query response. The future infrastructure design should take performance into account while embracing the benefit of linking instead of copying, without introducing extra complexity for users. Notwithstanding all the restrictions, we argue that large-scale aggregations also bring new opportunities for better exploiting the benefits of authority records. It is possible to use machine learning techniques to automatically link bibliographic records to authority records based on the manual input of cataloguers. Text mining and visualization techniques can offer a contextual view of authority records, which in turn can be used to retrieve missing or mis-catalogued records. In this talk, we will describe such opportunities in more detail.
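    The abstract argues that the structural information in authority files can be used to expand a user's query with variant, narrower, and related terms. A minimal sketch of that expansion step, assuming a simple dictionary-shaped authority record; the field names and sample data are illustrative and not tied to any particular authority format.

```python
def expand_query(term, authority_file):
    """Expand a query term with variants, narrower and related terms
    taken from a matching authority record (if any)."""
    record = authority_file.get(term.lower())
    if record is None:
        return [term]  # no authority record: search the string as typed
    expanded = [term]
    for key in ("variants", "narrower", "related"):
        expanded.extend(record.get(key, []))
    return expanded

# Invented sample record, shaped like a simplified subject authority entry.
authority_file = {
    "sports": {
        "variants": ["sport"],
        "narrower": ["ball games", "water sports"],
        "related": ["physical education"],
    }
}

print(expand_query("sports", authority_file))
# ['sports', 'sport', 'ball games', 'water sports', 'physical education']
```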
  16. Zedlitz, J.: Biographische Normdaten : ein Überblick (2017) 0.01
    0.0050237724 = product of:
      0.02009509 = sum of:
        0.02009509 = product of:
          0.04019018 = sum of:
            0.04019018 = weight(_text_:22 in 3502) [ClassicSimilarity], result of:
              0.04019018 = score(doc=3502,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.23214069 = fieldWeight in 3502, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3502)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Archivar. 70(2017) H.1, S.22-25
  17. Junger, U.; Schwens, U.: ¬Die inhaltliche Erschließung des schriftlichen kulturellen Erbes auf dem Weg in die Zukunft : Automatische Vergabe von Schlagwörtern in der Deutschen Nationalbibliothek (2017) 0.00
    0.0041864775 = product of:
      0.01674591 = sum of:
        0.01674591 = product of:
          0.03349182 = sum of:
            0.03349182 = weight(_text_:22 in 3780) [ClassicSimilarity], result of:
              0.03349182 = score(doc=3780,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.19345059 = fieldWeight in 3780, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3780)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    19. 8.2017 9:24:22