Search (65 results, page 1 of 4)

  • theme_ss:"Normdateien"
  1. Soergel, D.; Popescu, D.: Organization authority database design with classification principles (2015) 0.04
    0.0448143 = product of:
      0.0896286 = sum of:
        0.06843241 = weight(_text_:data in 2293) [ClassicSimilarity], result of:
          0.06843241 = score(doc=2293,freq=14.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.46216056 = fieldWeight in 2293, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2293)
        0.021196188 = product of:
          0.042392377 = sum of:
            0.042392377 = weight(_text_:processing in 2293) [ClassicSimilarity], result of:
              0.042392377 = score(doc=2293,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.22363065 = fieldWeight in 2293, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2293)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
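    The nested breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output for this result. The short Python sketch below simply re-derives the displayed score of 0.0448143 from the constants shown (term frequency, idf, queryNorm, fieldNorm, and the coord factors); the function and variable names are illustrative and not part of any search-engine API.

      from math import sqrt

      # Constants copied from the explanation above (result 1, doc 2293).
      query_norm = 0.046827413
      field_norm = 0.0390625

      def term_weight(freq, idf):
          tf = sqrt(freq)                       # tf(freq) = sqrt(termFreq)
          query_weight = idf * query_norm       # queryWeight = idf * queryNorm
          field_weight = tf * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
          return query_weight * field_weight    # weight = queryWeight * fieldWeight

      w_data = term_weight(14.0, 3.1620505)      # ~0.06843241 for _text_:data
      w_processing = term_weight(2.0, 4.048147)  # ~0.042392377 for _text_:processing

      inner = w_processing * 0.5      # coord(1/2): 1 of 2 clauses in the nested query matched
      score = (w_data + inner) * 0.5  # coord(2/4): 2 of 4 top-level clauses matched
      print(round(score, 7))          # ~0.0448143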
    
    Abstract
    We illustrate the principle of unified treatment of all authority data for any kind of entity (subjects/topics, places, events, persons, organizations, etc.) through the design and implementation of an enriched authority database for organizations, maintained as an integral part of an authority database that also includes subject authority control / classification data and that uses the same data structures and common modules for processing and display. Organization-related data are stored in the information systems of many companies. We specifically examine the case of the World Bank Group (WBG), where organizations appear in many roles: suppliers, partners, customers, competitors, authors, publishers or subjects of documents, loan recipients, suppliers for WBG-funded projects, and subunits of the organization itself. A central organization authority, in which each organization is identified by a URI, represented by several names and linked to other organizations through hierarchical and other relationships, serves to link data from these disparate information systems. Designing the conceptual structure of a unified authority database requires integrating SKOS, the W3C Organization Ontology and other schemes into one comprehensive ontology. To populate the authority database with organizations, we import data from external sources (e.g., DBpedia and Library of Congress authorities) and internal sources (e.g., the lists of organizations from multiple WBG information systems).
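    As an illustration of the structure described above (a URI-identified organization carrying several names, hierarchical links to other organizations, and links to external sources), here is a minimal sketch using rdflib with SKOS and the W3C Organization Ontology; the namespace, identifiers, and labels are invented for the example and are not drawn from the WBG authority database.

      from rdflib import Graph, Literal, Namespace, URIRef
      from rdflib.namespace import OWL, RDF, SKOS

      ORG = Namespace("http://www.w3.org/ns/org#")          # W3C Organization Ontology
      EX = Namespace("http://example.org/authority/org/")   # hypothetical authority namespace

      g = Graph()
      g.bind("skos", SKOS)
      g.bind("org", ORG)

      ifc = EX["ifc"]  # one authority entry per organization, identified by a URI
      g.add((ifc, RDF.type, ORG.Organization))
      g.add((ifc, SKOS.prefLabel, Literal("International Finance Corporation", lang="en")))
      g.add((ifc, SKOS.altLabel, Literal("IFC", lang="en")))
      # hierarchical relationship to a parent organization
      g.add((ifc, ORG.subOrganizationOf, EX["world-bank-group"]))
      # link to an external source such as DBpedia
      g.add((ifc, OWL.sameAs, URIRef("http://dbpedia.org/resource/International_Finance_Corporation")))

      print(g.serialize(format="turtle"))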
  2. Vellucci, S.L.: Metadata and authority control (2000) 0.04
    0.03670788 = product of:
      0.07341576 = sum of:
        0.051210128 = weight(_text_:data in 180) [ClassicSimilarity], result of:
          0.051210128 = score(doc=180,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.34584928 = fieldWeight in 180, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=180)
        0.022205638 = product of:
          0.044411276 = sum of:
            0.044411276 = weight(_text_:22 in 180) [ClassicSimilarity], result of:
              0.044411276 = score(doc=180,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.2708308 = fieldWeight in 180, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=180)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    A variety of information communities have developed metadata schemes to meet the needs of their own users. The ability of libraries to incorporate and use multiple metadata schemes in current library systems will depend on the compatibility of imported data with existing catalog data. Authority control will play an important role in metadata interoperability. In this article, I discuss factors for successful authority control in current library catalogs, which include operation in a well-defined and bounded universe, application of principles and standard practices to access point creation, reference to authoritative lists, and bibliographic record creation by highly trained individuals. Metadata characteristics and environmental models are examined and the likelihood of successful authority control is explored for a variety of metadata environments.
    Date
    10. 9.2000 17:38:22
  3. Hill, L.L.; Frew, J.; Zheng, Q.: Geographic names : the implementation of a gazetteer in a georeferenced digital library (1999) 0.02
    0.023109939 = product of:
      0.046219878 = sum of:
        0.029262928 = weight(_text_:data in 1240) [ClassicSimilarity], result of:
          0.029262928 = score(doc=1240,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.19762816 = fieldWeight in 1240, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=1240)
        0.016956951 = product of:
          0.033913903 = sum of:
            0.033913903 = weight(_text_:processing in 1240) [ClassicSimilarity], result of:
              0.033913903 = score(doc=1240,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.17890452 = fieldWeight in 1240, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1240)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The Alexandria Digital Library (ADL) Project has developed a content standard for gazetteer objects and a hierarchical type scheme for geographic features. Both of these developments build on ADL experience with an earlier gazetteer component for the Library, which was based on two gazetteers maintained by the U.S. federal government. We define the minimum components of a gazetteer entry as (1) a geographic name, (2) a geographic location represented by coordinates, and (3) a type designation. With these attributes, a gazetteer can function as a tool for indirect spatial location identification through names and types. The ADL Gazetteer Content Standard supports contribution and sharing of gazetteer entries with rich descriptions beyond the minimum requirements. This paper describes the content standard, the feature type thesaurus, and the implementation and research issues. A gazetteer is a list of geographic names, together with their geographic locations and other descriptive information. A geographic name is a proper name for a geographic place or feature, such as Santa Barbara County, Mount Washington, St. Francis Hospital, and Southern California. There are many types of printed gazetteers. For example, the New York Times Atlas has a gazetteer section that can be used to look up a geographic name and find the page(s) and grid reference(s) where the corresponding feature is shown. Some gazetteers provide information about places and features; for example, a history of the locale, population data, physical data such as elevation, or the pronunciation of the name. Some lists of geographic names are available as hierarchical term sets (thesauri) designed for information retrieval; these are used to describe bibliographic or museum materials. Examples include the authority files of the U.S. Library of Congress and the GeoRef Thesaurus produced by the American Geological Institute. The Getty Museum has recently made its Thesaurus of Geographic Names available online. This is a major project to develop a controlled vocabulary of current and historical names to describe (i.e., catalog) art and architecture literature. U.S. federal government mapping agencies maintain gazetteers containing the official names of places and/or the names that appear on map series. Examples include the U.S. Geological Survey's Geographic Names Information System (GNIS) and the National Imagery and Mapping Agency's Geographic Names Processing System (GNPS). Both of these are maintained in cooperation with the U.S. Board on Geographic Names (BGN). Many other examples could be cited -- for local areas, for other countries, and for special purposes. There is remarkable diversity in approaches to the description of geographic places and no standardization beyond authoritative sources for the geographic names themselves.
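    The minimum gazetteer entry defined above (a geographic name, a coordinate-based location, and a type designation) maps naturally onto a small record structure. The following is a hypothetical Python sketch, not the ADL Gazetteer Content Standard itself; class, field names, and sample values are illustrative.

      from dataclasses import dataclass, field

      @dataclass
      class GazetteerEntry:
          """Minimal gazetteer record: name + coordinates + feature type."""
          name: str                  # geographic name, e.g. "Santa Barbara County"
          latitude: float            # geographic location represented by coordinates
          longitude: float
          feature_type: str          # type designation from a feature-type thesaurus
          variant_names: list[str] = field(default_factory=list)  # optional richer description

      def lookup(entries: list[GazetteerEntry], name: str) -> list[GazetteerEntry]:
          """Indirect spatial location identification: resolve a name to coordinates and type."""
          wanted = name.lower()
          return [e for e in entries
                  if wanted == e.name.lower() or wanted in (v.lower() for v in e.variant_names)]

      gaz = [GazetteerEntry("Mount Washington", 44.27, -71.30, "mountains")]  # approximate coordinates
      print(lookup(gaz, "mount washington")[0].feature_type)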
  4. Kimura, M.: ¬A comparison of recorded authority data elements and the RDA Framework in Chinese character cultures (2015) 0.02
    0.022174634 = product of:
      0.088698536 = sum of:
        0.088698536 = weight(_text_:data in 2619) [ClassicSimilarity], result of:
          0.088698536 = score(doc=2619,freq=12.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.59902847 = fieldWeight in 2619, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2619)
      0.25 = coord(1/4)
    
    Abstract
    To investigate which authority data elements are recorded by libraries in the Chinese character cultural sphere (e.g., Japan, Mainland China, Hong Kong, Taiwan, South Korea, and Vietnam), the data elements recorded by each library were examined and compared with the authority data elements defined in the Resource Description and Access (RDA) standard. Recommendations were then made to libraries within this cultural sphere to improve and internationally standardize their authority data. In addition, suggestions are provided for modifying RDA in an effort to increase its compatibility with authority data in the Chinese character cultural sphere.
  5. Danowski, P.: Authority files and Web 2.0 : Wikipedia and the PND. An Example (2007) 0.02
    0.020863095 = product of:
      0.04172619 = sum of:
        0.02586502 = weight(_text_:data in 1291) [ClassicSimilarity], result of:
          0.02586502 = score(doc=1291,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.17468026 = fieldWeight in 1291, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1291)
        0.01586117 = product of:
          0.03172234 = sum of:
            0.03172234 = weight(_text_:22 in 1291) [ClassicSimilarity], result of:
              0.03172234 = score(doc=1291,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.19345059 = fieldWeight in 1291, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1291)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    More and more users are indexing content on their own in the Web 2.0 world: there are services for links, videos, pictures, books, encyclopaedic articles and scientific articles, and all of them operate independently of libraries. But does it have to be that way? Can't libraries contribute their experience and tools to make user indexing better? Drawing on a project between the German-language Wikipedia and the German name authority file (Personennamendatei, PND) maintained at the German National Library (Deutsche Nationalbibliothek), I show what is possible, and how users can and will use authority files if we let them. We will look at how the project worked and what we can learn for future projects. Conclusions: authority files can have a role in Web 2.0; there must be an open interface/service for retrieval; everything on the net that is indexed with authority files can easily be integrated into a federated search; and, in O'Reilly's terms, you have to find ways for your data to become more important the more it is used.
    Content
    Paper presented at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  6. Junger, U.; Schwens, U.: ¬Die inhaltliche Erschließung des schriftlichen kulturellen Erbes auf dem Weg in die Zukunft : Automatische Vergabe von Schlagwörtern in der Deutschen Nationalbibliothek (2017) 0.02
    0.020863095 = product of:
      0.04172619 = sum of:
        0.02586502 = weight(_text_:data in 3780) [ClassicSimilarity], result of:
          0.02586502 = score(doc=3780,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.17468026 = fieldWeight in 3780, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3780)
        0.01586117 = product of:
          0.03172234 = sum of:
            0.03172234 = weight(_text_:22 in 3780) [ClassicSimilarity], result of:
              0.03172234 = score(doc=3780,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.19345059 = fieldWeight in 3780, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3780)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    We live in the 21st century, and much of what would have been dismissed as science fiction a hundred or even fifty years ago is now reality. Space probes fly to Mars, run experiments there and send data back to Earth. Robots are used for routine tasks, for example in industry or in medicine. Digitization, artificial intelligence and automated processes have become an integral part of everyday life, and learning algorithms form the basis of many of these processes. The ongoing digital transformation is global and encompasses all areas of life and work: the economy, society and politics. It opens up new opportunities from which libraries can also benefit. The sharp rise in digital publications, which make up an important and steadily growing share of the cultural heritage, should prompt libraries to take up and exploit these opportunities actively. The analysability of digital content, for example through text and data mining (TDM), and the development of technical methods for networking content and relating it semantically, create room to rethink library subject indexing as well. The Deutsche Nationalbibliothek (DNB) has therefore been investigating for several years how the processes for subject indexing of media works can be improved and supported by machines. In doing so, it maintains a regular collegial exchange with other libraries that are also actively working on this question, as well as with European national libraries that are in turn interested in the topic and in the DNB's experience. As a national library with extensive holdings of digital publications, the DNB has also built up expertise in digital long-term preservation and is valued within its network of partners as a competent interlocutor.
    Date
    19. 8.2017 9:24:22
  7. Provost, A. Le; Nicolas, Y.: IdRef, Paprika and Qualinka : a toolbox for authority data quality and interoperability (2020) 0.02
    0.02024258 = product of:
      0.08097032 = sum of:
        0.08097032 = weight(_text_:data in 1076) [ClassicSimilarity], result of:
          0.08097032 = score(doc=1076,freq=10.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.5468357 = fieldWeight in 1076, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1076)
      0.25 = coord(1/4)
    
    Abstract
    Authority data has always been at the core of library catalogues. Today, authority data is reference data on a wider scale. The former authorities of the "Sudoc" union catalogue mutated into "IdRef", a read/write platform of open data and services which seeks to become a national supplier of reliable identifiers for French universities. To support their dissemination and comply with high quality standards, Paprika and Qualinka have been added to our toolbox, to expedite the massive and secure linking of scientific publications to IdRef authorities.
  8. Taniguchi, S.: Data provenance and administrative information in library linked data : reviewing RDA in RDF, BIBFRAME, and Wikidata (2024) 0.02
    0.018105512 = product of:
      0.07242205 = sum of:
        0.07242205 = weight(_text_:data in 1154) [ClassicSimilarity], result of:
          0.07242205 = score(doc=1154,freq=8.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.48910472 = fieldWeight in 1154, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1154)
      0.25 = coord(1/4)
    
    Abstract
    We examined how data provenance and additional information about element values, including nomens, as well as administrative information about the metadata, should be modeled and represented in the Resource Description Framework (RDF) for linked library catalog data. First, we classified such information types into categories and paired them with recording-units, i.e., a description statement or a description set. Next, we listed the appropriate RDF representation patterns for each recording-unit. Then, we reviewed how such information is handled in Resource Description and Access (RDA) in RDF, BIBFRAME, and Wikidata, and pointed out the issues involved in each.
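    One common way to realize the recording-unit-level provenance discussed above is to place a description set in a named graph and attach the provenance and administrative statements to that graph. The sketch below uses rdflib and the PROV vocabulary; the URIs are invented for illustration, and this is not a pattern prescribed by RDA in RDF, BIBFRAME, or Wikidata.

      from rdflib import Dataset, Literal, Namespace, URIRef
      from rdflib.namespace import PROV, RDFS, XSD

      EX = Namespace("http://example.org/")                        # hypothetical namespace
      RECORD_GRAPH = URIRef("http://example.org/graph/record-42")  # one description set

      ds = Dataset()

      # the description set itself lives in its own named graph
      desc = ds.graph(RECORD_GRAPH)
      desc.add((EX["work-42"], RDFS.label, Literal("Example work")))

      # provenance and administrative information are stated about that graph
      admin = ds.graph(URIRef("http://example.org/graph/admin"))
      admin.add((RECORD_GRAPH, PROV.wasAttributedTo, EX["cataloguing-agency"]))
      admin.add((RECORD_GRAPH, PROV.generatedAtTime,
                 Literal("2024-01-15T09:00:00Z", datatype=XSD.dateTime)))

      print(ds.serialize(format="trig"))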
  9. Chen, S.-J.: Semantic enrichment of linked personal authority data : a case study of elites in late imperial China (2019) 0.02
    0.015839024 = product of:
      0.063356094 = sum of:
        0.063356094 = weight(_text_:data in 5642) [ClassicSimilarity], result of:
          0.063356094 = score(doc=5642,freq=12.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.4278775 = fieldWeight in 5642, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5642)
      0.25 = coord(1/4)
    
    Abstract
    The study uses the Database of Names and Biographies (DNB) as an example to explore how, in the transformation of original data into linked data, semantic enrichment can enhance engagement in digital humanities. In the preliminary results, we have defined instance-based and schema-based categories of semantic enrichment. In the instance-based category, in which enrichment occurs by enhancing the content of entities, we further identified three types: 1) enriching entities by linking them to diverse external resources in order to provide additional data from multiple perspectives; 2) enriching entities with the missing data needed to satisfy semantic queries; and 3) providing entities with access to an extended knowledge base. In the schema-based category, in which enrichment occurs by enhancing the relations between properties, we identified two types: 1) enriching properties by defining hierarchical relations between them; and 2) specifying properties' domains and ranges for data reasoning. In addition, the study implements the LOD dataset in a digital humanities platform to demonstrate how instances and entities can be applied to the full texts, where the relationships between entities are highlighted in order to give scholars more semantic detail about the texts.
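    The two schema-based enrichment types above (hierarchical relations between properties, and domain/range specification to support reasoning) correspond to ordinary RDFS declarations. A minimal sketch follows; the property and class names are invented and do not come from the DNB dataset.

      from rdflib import Graph, Namespace
      from rdflib.namespace import RDF, RDFS

      EX = Namespace("http://example.org/schema/")   # hypothetical schema namespace
      g = Graph()
      g.bind("ex", EX)

      # 1) hierarchical relations between properties
      g.add((EX.taughtAt, RDF.type, RDF.Property))
      g.add((EX.taughtAt, RDFS.subPropertyOf, EX.affiliatedWith))

      # 2) domain and range, so a reasoner can type the entities linked by the property
      g.add((EX.taughtAt, RDFS.domain, EX.Person))
      g.add((EX.taughtAt, RDFS.range, EX.Academy))

      print(g.serialize(format="turtle"))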
  10. Patton, G.E.: FRANAR: a conceptual model for authority data (2004) 0.02
    0.015679834 = product of:
      0.06271934 = sum of:
        0.06271934 = weight(_text_:data in 5661) [ClassicSimilarity], result of:
          0.06271934 = score(doc=5661,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.42357713 = fieldWeight in 5661, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5661)
      0.25 = coord(1/4)
    
    Abstract
    Discusses the work of the IFLA Working Group on Functional Requirements and Numbering of Authority Records (FRANAR). Describes the group's activities to build liaison relationships with other sectors of the information community that create and maintain data similar to library authority files. Provides a description of the entity-relationship model being developed by the Working Group to extend the FRBR model to cover authority data. (Note: readers should be aware that the Working Group's entity-relationship model has changed considerably since this paper was written in December 2002.)
  11. Zhu, L.; Xu, A.; Deng, S.; Heng, G.; Li, X.: Entity management using Wikidata for cultural heritage information (2024) 0.02
    0.015679834 = product of:
      0.06271934 = sum of:
        0.06271934 = weight(_text_:data in 975) [ClassicSimilarity], result of:
          0.06271934 = score(doc=975,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.42357713 = fieldWeight in 975, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=975)
      0.25 = coord(1/4)
    
    Abstract
    Entity management in a Linked Open Data (LOD) environment is a process of associating a unique, persistent, and dereferenceable Uniform Resource Identifier (URI) with a single entity. It allows data from various sources to be reused and connected to the Web. It can help improve data quality and enable more efficient workflows. This article describes a semi-automated entity management project conducted by the "Wikidata: WikiProject Chinese Culture and Heritage Group," explores the challenges and opportunities in describing Chinese women poets and historical places in Wikidata, the largest crowdsourcing LOD platform in the world, and discusses lessons learned and future opportunities.
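    Because every Wikidata item already carries a unique, persistent, and dereferenceable URI, entity management of the kind described above can build directly on the public Wikidata Query Service. The following is a hedged sketch; the query (humans with occupation "poet" and a Chinese label) only illustrates the idea and is not the project's actual workflow.

      import requests

      ENDPOINT = "https://query.wikidata.org/sparql"   # public Wikidata Query Service
      QUERY = """
      SELECT ?item ?itemLabel WHERE {
        ?item wdt:P31 wd:Q5 ;          # instance of: human
              wdt:P106 wd:Q49757 ;     # occupation: poet
              rdfs:label ?itemLabel .
        FILTER(LANG(?itemLabel) = "zh")
      }
      LIMIT 5
      """

      resp = requests.get(ENDPOINT,
                          params={"query": QUERY, "format": "json"},
                          headers={"User-Agent": "entity-management-sketch/0.1 (example)"})
      for row in resp.json()["results"]["bindings"]:
          # each ?item is a persistent URI such as http://www.wikidata.org/entity/Q...
          print(row["item"]["value"], row["itemLabel"]["value"])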
  12. Silvester, J.P.; Klingbiel, P.H.: ¬An operational system for subject switching between controlled vocabularies (1993) 0.01
    0.014837332 = product of:
      0.05934933 = sum of:
        0.05934933 = product of:
          0.11869866 = sum of:
            0.11869866 = weight(_text_:processing in 4357) [ClassicSimilarity], result of:
              0.11869866 = score(doc=4357,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.6261658 = fieldWeight in 4357, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4357)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 29(1993) no.1, S.47-59
  13. Patton, G.E.: FRAR: extending FRBR concepts to authority data (2005) 0.01
    0.014631464 = product of:
      0.058525857 = sum of:
        0.058525857 = weight(_text_:data in 4228) [ClassicSimilarity], result of:
          0.058525857 = score(doc=4228,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.3952563 = fieldWeight in 4228, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=4228)
      0.25 = coord(1/4)
    
    Abstract
    The IFLA FRANAR Working Group is charged with extending the concepts of the IFLA Functional Requirements for Bibliographic Records to authority data. The paper reports on the current state of the Working Group's activities.
  14. Bourdon, F.: Modeling authority data for libraries, archives, and museums : a project in progress at AFNOR (2004) 0.01
    0.014631464 = product of:
      0.058525857 = sum of:
        0.058525857 = weight(_text_:data in 5690) [ClassicSimilarity], result of:
          0.058525857 = score(doc=5690,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.3952563 = fieldWeight in 5690, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=5690)
      0.25 = coord(1/4)
    
    Abstract
    To give a national basis to the considerations developed at IFLA with FRANAR, a working group devoted to modelling authority data was created in the framework of the French Organization for Standardization (AFNOR) in 2000. The Working Group aims at developing interoperability among libraries, archives and museums. Composition, goals, and the working plan of this Group are presented.
  15. MacEwan, A.: Project InterParty : from library authority files to e-commerce (2004) 0.01
    0.014458986 = product of:
      0.057835944 = sum of:
        0.057835944 = weight(_text_:data in 5687) [ClassicSimilarity], result of:
          0.057835944 = score(doc=5687,freq=10.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.39059696 = fieldWeight in 5687, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5687)
      0.25 = coord(1/4)
    
    Abstract
    InterParty is a project that aims to develop a mechanism that will enable the interoperation of identifiers for "parties" or persons (authors, publishers, etc. - persons and corporate bodies in library authority files) across multiple domains. Partners represent the book industry, rights management, libraries, and identifier and technology communities, united by their perception of a common benefit from interoperation in terms of access to "common metadata" held by other members to improve the quality of their own data. The InterParty solution proposes a distributed network of members who provide access to "common metadata," defined as information in the public domain, sufficient to identify and distinguish the "public identity" of a person. At a minimum the InterParty network would provide access to multiple domains of data about persons, including multiple library authority files, author licensing data files, etc. It will also add value by providing a facility for linking records between different data files by means of a "link record." Link records will assert that an identity recorded in one database is the same as another identity recorded in another database. Linked data will be mutually enriching and therefore more reliable and supportive of accurate disambiguation of persons within and between databases. InterParty has potential to develop a common system that supports both the emerging needs of e-commerce and the traditional requirements of library authority control.
  16. Tillett, B.B.: Complementarity of perspectives for resource descriptions (2015) 0.01
    0.014458986 = product of:
      0.057835944 = sum of:
        0.057835944 = weight(_text_:data in 2288) [ClassicSimilarity], result of:
          0.057835944 = score(doc=2288,freq=10.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.39059696 = fieldWeight in 2288, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2288)
      0.25 = coord(1/4)
    
    Abstract
    Bibliographic data is used to describe resources held in the collections of libraries, archives and museums. That data is mostly available on the Web today and mostly as linked data. Also on the Web are the controlled vocabulary systems of name authority files, like the Virtual International Authority File (VIAF), classification systems, and subject terms. These systems offer their own linked data to potentially help users find the information they want - whether at their local library or anywhere in the world that is willing to make their resources available. We have found it beneficial to merge authority data for names on a global level, as the entities are relatively clear. That is not true for subject concepts and terminology that have categorisation systems developed according to varying principles and schemes and are in multiple languages. Rather than requiring everyone in the world to use the same categorisation/classification system in the same language, we know that the Web offers us the opportunity to add descriptors assigned around the world using multiple systems from multiple perspectives to identify our resources. Those descriptors add value to refine searches, help users worldwide and share globally what each library does locally.
  17. Cree, J.S.: Data conversion and migration at the libraries of the Home Office and the Department of the Environment (1997) 0.01
    0.013439858 = product of:
      0.053759433 = sum of:
        0.053759433 = weight(_text_:data in 2175) [ClassicSimilarity], result of:
          0.053759433 = score(doc=2175,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.3630661 = fieldWeight in 2175, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2175)
      0.25 = coord(1/4)
    
    Abstract
    Describes the experience of data conversion and migration at the libraries of the Home Office (HO) and the Department of the Environment (DoE), UK. Both HO and DoE libraries had changed from Anglo-American code cataloguing to AACR2 cataloguing in the mid-1970s, and both were selective in identifying records for conversion, initially to BLAISE-LOCAS. Conversion to integrated library systems from BLAISE-LOCAS MARC tapes produced problems in both libraries with location/holdings fields, which were largely resolved at HO but not at DoE. HO also experienced problems converting to a system with fixed field lengths. HO converted subject keywords into a rudimentary, non-standard thesaurus, which required the addition of Broader Term and Narrower Term relationships to meet the challenge of computerized searching. DoE converted a non-thesaurus subject index to an authority file, but continued to maintain the index in a stand-alone DataEase application for use by cataloguers. Neither library converted acquisitions data.
  18. French, J.C.; Powell, A.L.; Schulman, E.: Using clustering strategies for creating authority files (2000) 0.01
    0.013439858 = product of:
      0.053759433 = sum of:
        0.053759433 = weight(_text_:data in 4811) [ClassicSimilarity], result of:
          0.053759433 = score(doc=4811,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.3630661 = fieldWeight in 4811, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=4811)
      0.25 = coord(1/4)
    
    Abstract
    As more online databases are integrated into digital libraries, the issue of quality control of the data becomes increasingly important, especially as it relates to the effective retrieval of information. Authority work, the need to discover and reconcile variant forms of strings in bibliographical entries, will become more critical in the future. Spelling variants, misspellings, and transliteration differences will all increase the difficulty of retrieving information. We investigate a number of approximate string matching techniques that have traditionally been used to help with this problem. We then introduce the notion of approximate word matching and show how it can be used to improve detection and categorization of variant forms. We demonstrate the utility of these approaches using data from the Astrophysics Data System and show how we can reduce the human effort involved in the creation of authority files
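    As a rough illustration of the approximate string matching described above, the sketch below groups variant name forms using Python's standard-library difflib; the matching and clustering techniques evaluated in the paper are more sophisticated, and the sample names and threshold here are illustrative.

      from difflib import SequenceMatcher

      def similarity(a: str, b: str) -> float:
          """Approximate string match on case-folded name forms."""
          return SequenceMatcher(None, a.lower(), b.lower()).ratio()

      def cluster(names: list[str], threshold: float = 0.65) -> list[list[str]]:
          """Greedy single-pass grouping of variant forms (illustrative only)."""
          clusters: list[list[str]] = []
          for name in names:
              for group in clusters:
                  if similarity(name, group[0]) >= threshold:
                      group.append(name)
                      break
              else:
                  clusters.append([name])
          return clusters

      variants = ["Schulman, E.", "Schulman, Elizabeth", "Shulman, E.",
                  "French, J.C.", "French, James C."]
      print(cluster(variants))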
  19. Altenhöner, R.; Hannemann, J.; Kett, J.: Linked Data aus und für Bibliotheken : Rückgratstärkung im Semantic Web (2010) 0.01
    0.013439858 = product of:
      0.053759433 = sum of:
        0.053759433 = weight(_text_:data in 4264) [ClassicSimilarity], result of:
          0.053759433 = score(doc=4264,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.3630661 = fieldWeight in 4264, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=4264)
      0.25 = coord(1/4)
    
    Abstract
    The Deutsche Nationalbibliothek (DNB) has begun to publish its knowledge base, consisting of bibliographic data and, above all, of its authority data, as linked data. By publishing the data as triples, the DNB aims to enable the Semantic Web community to make direct use of the complete national bibliographic and authority data, and thereby to reach entirely new user groups. At the same time, this is meant to open the door to a new way of cooperative data use. The long-term goal is to establish libraries and other cultural institutions as a reliable backbone of the Web of data.
    Source
    Semantic web & linked data: Elemente zukünftiger Informationsinfrastrukturen ; 1. DGI-Konferenz ; 62. Jahrestagung der DGI ; Frankfurt am Main, 7. - 9. Oktober 2010 ; Proceedings / Deutsche Gesellschaft für Informationswissenschaft und Informationspraxis. Hrsg.: M. Ockenfeld
  20. Barbalet, S.: Enhancing subject authority control at the UK Data Archive : a pilot study using UDC (2015) 0.01
    0.01293251 = product of:
      0.05173004 = sum of:
        0.05173004 = weight(_text_:data in 2306) [ClassicSimilarity], result of:
          0.05173004 = score(doc=2306,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.34936053 = fieldWeight in 2306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.078125 = fieldNorm(doc=2306)
      0.25 = coord(1/4)
    

Languages

  • e 51
  • d 13

Types

  • a 55
  • el 13
  • b 2
  • m 2
  • r 1