Search (603 results, page 1 of 31)

  • Filter: type_ss:"el"
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.30
    0.30119002 = product of:
      0.50198334 = sum of:
        0.11927389 = product of:
          0.35782167 = sum of:
            0.35782167 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.35782167 = score(doc=1826,freq=2.0), product of:
                0.38200375 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04505818 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
        0.35782167 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.35782167 = score(doc=1826,freq=2.0), product of:
            0.38200375 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04505818 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
        0.024887787 = product of:
          0.049775574 = sum of:
            0.049775574 = weight(_text_:data in 1826) [ClassicSimilarity], result of:
              0.049775574 = score(doc=1826,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.34936053 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Content
    Presentation given at: European Conference on Data Analysis (ECDA 2014), Bremen, Germany, 2-4 July 2014, LIS workshop.
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
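The explain trees shown for each result follow Lucene's ClassicSimilarity (tf-idf) formula. As a sketch, the numbers for the `3a` term of result 1 can be reproduced from the constants in the tree above (freq, docFreq, maxDocs, fieldNorm, queryNorm are copied verbatim from the explain output):

```python
import math

# Constants copied from the explain tree for the "3a" term in doc 1826.
freq, doc_freq, max_docs = 2.0, 24, 44218
field_norm, query_norm = 0.078125, 0.04505818

tf = math.sqrt(freq)                              # ~1.4142135
idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # ~8.478011
query_weight = idf * query_norm                   # ~0.38200375
field_weight = tf * idf * field_norm              # ~0.93669677
score = query_weight * field_weight               # ~0.35782167

# The three matching clauses are combined with coord factors:
# the "3a" clause carries coord(1/3), then the whole sum carries coord(3/5).
clause_sum = score * (1 / 3) + score + 0.024887787  # "3a", "2f", "data"
final = clause_sum * (3 / 5)                        # ~0.30119002
```

Each intermediate value matches the corresponding line of the explain tree, which is a useful sanity check when debugging relevance scores.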
  2. Popper, K.R.: Three worlds : the Tanner Lecture on Human Values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.15
    0.15267058 = product of:
      0.38167644 = sum of:
        0.09541911 = product of:
          0.28625733 = sum of:
            0.28625733 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.28625733 = score(doc=230,freq=2.0), product of:
                0.38200375 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04505818 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
        0.28625733 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.28625733 = score(doc=230,freq=2.0), product of:
            0.38200375 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04505818 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
      0.4 = coord(2/5)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  3. Daquino, M.; Peroni, S.; Shotton, D.; Colavizza, G.; Ghavimi, B.; Lauscher, A.; Mayr, P.; Romanello, M.; Zumstein, P.: The OpenCitations Data Model (2020) 0.13
    0.12976845 = product of:
      0.21628073 = sum of:
        0.11275144 = weight(_text_:readable in 38) [ClassicSimilarity], result of:
          0.11275144 = score(doc=38,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.4072887 = fieldWeight in 38, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.046875 = fieldNorm(doc=38)
        0.06402116 = weight(_text_:bibliographic in 38) [ClassicSimilarity], result of:
          0.06402116 = score(doc=38,freq=4.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.3649729 = fieldWeight in 38, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=38)
        0.03950814 = product of:
          0.07901628 = sum of:
            0.07901628 = weight(_text_:data in 38) [ClassicSimilarity], result of:
              0.07901628 = score(doc=38,freq=14.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.55459267 = fieldWeight in 38, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=38)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    A variety of schemas and ontologies are currently used for the machine-readable description of bibliographic entities and citations. This diversity, and the reuse of the same ontology terms with different nuances, generates inconsistencies in data. Adoption of a single data model would facilitate data integration tasks regardless of the data supplier or context application. In this paper we present the OpenCitations Data Model (OCDM), a generic data model for describing bibliographic entities and citations, developed using Semantic Web technologies. We also evaluate the effective reusability of OCDM according to ontology evaluation practices, mention existing users of OCDM, and discuss the use and impact of OCDM in the wider open science community.
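As a rough illustration only (not the authoritative OCDM, which is defined in its own specification documents), a citation treated as a first-class entity can be written as plain RDF triples. The `cito:` terms come from the SPAR CiTO ontology that OCDM builds on; the DOIs and the citation-entity URI here are invented for the example:

```python
# Minimal N-Triples writer; no RDF library required.
CITO = "http://purl.org/spar/cito/"

def triple(s, p, o):
    """Serialize one triple of IRIs in N-Triples syntax."""
    return f"<{s}> <{p}> <{o}> ."

citing = "https://doi.org/10.1000/example.1"  # invented DOI
cited = "https://doi.org/10.1000/example.2"   # invented DOI
ci = "https://example.org/ci/1"               # hypothetical citation entity

graph = [
    triple(citing, CITO + "cites", cited),
    triple(ci, CITO + "hasCitingEntity", citing),
    triple(ci, CITO + "hasCitedEntity", cited),
]
print("\n".join(graph))
```

Modelling the citation itself as an entity (rather than only as a `cito:cites` link) is what lets a data model attach metadata such as the citation's creation date or its self-citation status.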
  4. Shala, E.: Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.10
    0.09541912 = product of:
      0.23854779 = sum of:
        0.059636947 = product of:
          0.17891084 = sum of:
            0.17891084 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.17891084 = score(doc=4388,freq=2.0), product of:
                0.38200375 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04505818 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
        0.17891084 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.17891084 = score(doc=4388,freq=2.0), product of:
            0.38200375 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04505818 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
      0.4 = coord(2/5)
    
    Footnote
    See: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls
  5. Klic, L.; Miller, M.; Nelson, J.K.; Pattuelli, C.; Provo, A.: The drawings of the Florentine painters : from print catalog to linked open data (2017) 0.06
    0.058456767 = product of:
      0.14614192 = sum of:
        0.11275144 = weight(_text_:readable in 4105) [ClassicSimilarity], result of:
          0.11275144 = score(doc=4105,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.4072887 = fieldWeight in 4105, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.046875 = fieldNorm(doc=4105)
        0.03339047 = product of:
          0.06678094 = sum of:
            0.06678094 = weight(_text_:data in 4105) [ClassicSimilarity], result of:
              0.06678094 = score(doc=4105,freq=10.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.46871632 = fieldWeight in 4105, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4105)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The Drawings of The Florentine Painters project created the first online database of Florentine Renaissance drawings by applying Linked Open Data (LOD) techniques to a foundational text of the same name, first published by Bernard Berenson in 1903 (revised and expanded editions, 1938 and 1961). The goal was to make Berenson's catalog information, still an essential information resource today, available in a machine-readable format, allowing researchers to access the source content through open data services. This paper provides a technical overview of the methods and processes applied in the conversion of Berenson's catalog to LOD using the CIDOC-CRM ontology; it also discusses the different phases of the project, focusing on the challenges and issues of data transformation and publishing. The project was funded by the Samuel H. Kress Foundation and organized by Villa I Tatti, The Harvard University Center for Italian Renaissance Studies. Catalog: http://florentinedrawings.itatti.harvard.edu. Data Endpoint: http://data.itatti.harvard.edu.
  6. OWL Web Ontology Language Guide (2004) 0.06
    0.058129102 = product of:
      0.14532275 = sum of:
        0.13287885 = weight(_text_:readable in 4687) [ClassicSimilarity], result of:
          0.13287885 = score(doc=4687,freq=4.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.47999436 = fieldWeight in 4687, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4687)
        0.012443894 = product of:
          0.024887787 = sum of:
            0.024887787 = weight(_text_:data in 4687) [ClassicSimilarity], result of:
              0.024887787 = score(doc=4687,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.17468026 = fieldWeight in 4687, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4687)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The World Wide Web as it is currently constituted resembles a poorly mapped geography. Our insight into the documents and capabilities available is based on keyword searches, abetted by clever use of document connectivity and usage patterns. The sheer mass of this data is unmanageable without powerful tool support. In order to map this terrain more precisely, computational agents require machine-readable descriptions of the content and capabilities of Web accessible resources. These descriptions must be in addition to the human-readable versions of that information. The OWL Web Ontology Language is intended to provide a language that can be used to describe the classes and relations between them that are inherent in Web documents and applications. This document demonstrates the use of the OWL language to
    - formalize a domain by defining classes and properties of those classes,
    - define individuals and assert properties about them, and
    - reason about these classes and individuals to the degree permitted by the formal semantics of the OWL language.
    The sections are organized to present an incremental definition of a set of classes, properties and individuals, beginning with the fundamentals and proceeding to more complex language components.
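The three tasks the guide lists (defining classes and properties, typing individuals, asserting properties about them) can be sketched as plain triples. The `rdf:`, `rdfs:`, and `owl:` IRIs below are the standard ones; the `ex:` namespace and the wine-themed names are invented for illustration (the guide's own running example is a wine ontology):

```python
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
RDFS = "http://www.w3.org/2000/01/rdf-schema#"
OWL = "http://www.w3.org/2002/07/owl#"
EX = "http://example.org/wine#"  # invented namespace

def triple(s, p, o):
    """Serialize one triple of IRIs in N-Triples syntax."""
    return f"<{s}> <{p}> <{o}> ."

graph = [
    # 1. Formalize a domain by defining classes and properties.
    triple(EX + "Wine", RDF + "type", OWL + "Class"),
    triple(EX + "hasMaker", RDF + "type", OWL + "ObjectProperty"),
    triple(EX + "hasMaker", RDFS + "domain", EX + "Wine"),
    # 2. Define an individual and assert properties about it.
    triple(EX + "ChiantiClassico", RDF + "type", EX + "Wine"),
    triple(EX + "ChiantiClassico", EX + "hasMaker", EX + "SomeWinery"),
]
```

A reasoner could then use the `rdfs:domain` axiom to infer, for example, that anything with a `hasMaker` value is a `Wine` — the third task the guide describes.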
  7. O'Neill, E.T.: The FRBRization of Humphry Clinker : a case study in the application of IFLA's Functional Requirements for Bibliographic Records (FRBR) (2002) 0.06
    0.05719 = product of:
      0.142975 = sum of:
        0.12804233 = weight(_text_:bibliographic in 2433) [ClassicSimilarity], result of:
          0.12804233 = score(doc=2433,freq=16.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.7299458 = fieldWeight in 2433, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=2433)
        0.014932672 = product of:
          0.029865343 = sum of:
            0.029865343 = weight(_text_:data in 2433) [ClassicSimilarity], result of:
              0.029865343 = score(doc=2433,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.2096163 = fieldWeight in 2433, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2433)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The goal of OCLC's FRBR projects is to examine issues associated with the conversion of a set of bibliographic records to conform to FRBR requirements (a process referred to as "FRBRization"). The goals of this FRBR project were to:
    - examine issues associated with creating an entity-relationship model for (i.e., "FRBRizing") a non-trivial work
    - better understand the relationship between the bibliographic records and the bibliographic objects they represent
    - determine if the information available in the bibliographic record is sufficient to reliably identify the FRBR entities
    - develop a data set that could be used to evaluate FRBRization algorithms.
    Using an exemplary work as a case study, lead scientist Ed O'Neill sought to:
    - better understand the relationship between bibliographic records and the bibliographic objects they represent
    - determine if the information available in the bibliographic records is sufficient to reliably identify FRBR entities.
  8. Vizine-Goetz, D.; Hickey, C.; Houghton, A.; Thompson, R.: Vocabulary mapping for terminology services (2004) 0.05
    0.051073648 = product of:
      0.12768412 = sum of:
        0.11275144 = weight(_text_:readable in 918) [ClassicSimilarity], result of:
          0.11275144 = score(doc=918,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.4072887 = fieldWeight in 918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.046875 = fieldNorm(doc=918)
        0.014932672 = product of:
          0.029865343 = sum of:
            0.029865343 = weight(_text_:data in 918) [ClassicSimilarity], result of:
              0.029865343 = score(doc=918,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.2096163 = fieldWeight in 918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=918)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The paper describes a project to add value to controlled vocabularies by making inter-vocabulary associations. A methodology for mapping terms from one vocabulary to another is presented in the form of a case study applying the approach to the Educational Resources Information Center (ERIC) Thesaurus and the Library of Congress Subject Headings (LCSH). Our approach to mapping involves encoding vocabularies according to Machine-Readable Cataloging (MARC) standards, machine matching of vocabulary terms, and categorizing candidate mappings by likelihood of valid mapping. Mapping data is then stored as machine links. Vocabularies with associations to other schemes will be a key component of Web-based terminology services. The paper briefly describes how the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) is used to provide access to a vocabulary with mappings.
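The machine-matching step described above can be approximated by normalizing terms before comparison and categorizing the candidates. This is only a sketch of the general technique; the normalization rules and the sample terms are invented for illustration, not actual ERIC or LCSH data:

```python
import re

def normalize(term):
    """Lowercase, strip punctuation, collapse whitespace."""
    term = re.sub(r"[^\w\s]", " ", term.lower())
    return " ".join(term.split())

def match_terms(source_terms, target_terms):
    """Return candidate mappings, categorized by match likelihood."""
    index = {normalize(t): t for t in target_terms}
    mappings = []
    for term in source_terms:
        hit = index.get(normalize(term))
        category = "exact" if hit else "unmatched"
        mappings.append((term, hit, category))
    return mappings

eric_like = ["Computer Literacy", "School libraries"]  # invented samples
lcsh_like = ["Computer literacy", "Library science"]
result = match_terms(eric_like, lcsh_like)
```

A real mapping pipeline would add further likelihood categories (e.g. singular/plural variants or partial matches) between "exact" and "unmatched", which is what makes human review of the candidate mappings tractable.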
  9. Dini, L.: CACAO : multilingual access to bibliographic records (2007) 0.05
    0.05086726 = product of:
      0.12716815 = sum of:
        0.09053959 = weight(_text_:bibliographic in 126) [ClassicSimilarity], result of:
          0.09053959 = score(doc=126,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.5161496 = fieldWeight in 126, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.09375 = fieldNorm(doc=126)
        0.036628567 = product of:
          0.07325713 = sum of:
            0.07325713 = weight(_text_:22 in 126) [ClassicSimilarity], result of:
              0.07325713 = score(doc=126,freq=2.0), product of:
                0.15778607 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04505818 = queryNorm
                0.46428138 = fieldWeight in 126, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=126)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    Lecture given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  10. Manguinhas, H.; Freire, N.; Machado, J.; Borbinha, J.: Supporting multilingual bibliographic resource discovery with Functional Requirements for Bibliographic Records (2012) 0.05
    0.050247353 = product of:
      0.12561838 = sum of:
        0.11317448 = weight(_text_:bibliographic in 133) [ClassicSimilarity], result of:
          0.11317448 = score(doc=133,freq=18.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.64518696 = fieldWeight in 133, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=133)
        0.012443894 = product of:
          0.024887787 = sum of:
            0.024887787 = weight(_text_:data in 133) [ClassicSimilarity], result of:
              0.024887787 = score(doc=133,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.17468026 = fieldWeight in 133, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=133)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This paper describes an experiment exploring the hypothesis that innovative application of the Functional Requirements for Bibliographic Records (FRBR) principles can complement traditional bibliographic resource discovery systems in order to improve the user experience. A specialized service was implemented that, when given a plain list of results from a regular online catalogue, was able to process, enrich and present that list in a more relevant way for the user. This service pre-processes the records of a traditional online catalogue in order to build a semantic structure following the FRBR model. The service also explores web search features that have been revolutionizing the way users conceptualize resource discovery, such as relevance ranking and metasearching. This work was developed in the context of the TELPlus project. We processed nearly one hundred thousand bibliographic and authority records, in multiple languages, and originating from twelve European national libraries. This paper describes the architecture of the service and the main challenges faced, especially concerning the extraction and linking of the relevant FRBR entities from the bibliographic metadata produced by the libraries. The service was evaluated by end users, who filled out a questionnaire after using a traditional online catalogue and the new service, both with the same bibliographic collection. The analysis of the results supports the hypothesis that FRBR can be implemented for resource discovery in a non-intrusive way, reusing the data of any existing traditional bibliographic system.
    Content
    Contribution to a special issue: Semantic Web and Reasoning for Cultural Heritage and Digital Libraries. See: http://www.semantic-web-journal.net/content/supporting-multilingual-bibliographic-resource-discovery-functional-requirements-bibliograph http://www.semantic-web-journal.net/sites/default/files/swj145_2.pdf.
  11. FictionFinder : a FRBR-based prototype for fiction in WorldCat (n.d.) 0.05
    0.04922039 = product of:
      0.12305097 = sum of:
        0.105629526 = weight(_text_:bibliographic in 2432) [ClassicSimilarity], result of:
          0.105629526 = score(doc=2432,freq=8.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.6021745 = fieldWeight in 2432, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2432)
        0.01742145 = product of:
          0.0348429 = sum of:
            0.0348429 = weight(_text_:data in 2432) [ClassicSimilarity], result of:
              0.0348429 = score(doc=2432,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.24455236 = fieldWeight in 2432, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2432)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    FictionFinder is a FRBR-based prototype that provides access to over 2.9 million bibliographic records for fiction books, eBooks, and audio materials described in OCLC WorldCat. This project applies principles of the FRBR model to aggregate bibliographic information above the manifestation level. Records are clustered into works using the OCLC FRBR Work-Set Algorithm. The algorithm collects bibliographic records into groups based on author and title information from bibliographic and authority records. Author names and titles are normalized to construct a key. All records with the same key are grouped together in a work set.
    Source
    http://www.oclc.org/research/themes/data-science/fictionfinder.html
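The clustering step described in the abstract (normalize author and title, construct a key, group all records sharing that key into a work set) can be sketched as follows. The key-construction details of the real OCLC FRBR Work-Set Algorithm are more involved, and the sample records are invented:

```python
import re
from collections import defaultdict

def normalize(text):
    """Uppercase and strip everything but letters, digits and spaces."""
    text = re.sub(r"[^\w\s]", "", text.upper())
    return " ".join(text.split())

def work_key(record):
    """Author/title key; the real algorithm's key construction differs."""
    return normalize(record["author"]) + "/" + normalize(record["title"])

def build_work_sets(records):
    """Group bibliographic records into work sets by shared key."""
    work_sets = defaultdict(list)
    for record in records:
        work_sets[work_key(record)].append(record)
    return dict(work_sets)

records = [  # invented sample records
    {"author": "Smollett, Tobias", "title": "Humphry Clinker"},
    {"author": "SMOLLETT, TOBIAS", "title": "Humphry Clinker."},
    {"author": "Popper, Karl R.", "title": "Three worlds"},
]
sets = build_work_sets(records)
```

The first two records differ only in case and punctuation, so normalization maps them to the same key and they land in one work set, which is exactly the aggregation "above the manifestation level" the abstract describes.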
  12. Nicholson, D.: Help us make HILT's terminology services useful in your information service (2008) 0.05
    0.04871397 = product of:
      0.121784925 = sum of:
        0.09395953 = weight(_text_:readable in 3654) [ClassicSimilarity], result of:
          0.09395953 = score(doc=3654,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.33940727 = fieldWeight in 3654, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3654)
        0.027825395 = product of:
          0.05565079 = sum of:
            0.05565079 = weight(_text_:data in 3654) [ClassicSimilarity], result of:
              0.05565079 = score(doc=3654,freq=10.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.39059696 = fieldWeight in 3654, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3654)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The JISC-funded HILT project is looking to make contact with staff in information services or projects interested in helping it test and refine its developing terminology services. The project is currently working to create pilot web services that will deliver machine-readable terminology and cross-terminology mappings data likely to be useful to information services wishing to extend or enhance the efficacy of their subject search or browse services. Based on SRW/U, SOAP, and SKOS, the HILT facilities, when fully operational, will permit such services to improve their own subject search and browse mechanisms by using HILT data in a fashion transparent to their users. On request, HILT will serve up machine-processable data on individual subject schemes (broader terms, narrower terms, hierarchy information, preferred and non-preferred terms, and so on) and interoperability data (usually intellectual or automated mappings between schemes, but the architecture allows for the use of other methods) - data that can be used to enhance user services. The project is also developing an associated toolkit that will help service technical staff to embed HILT-related functionality into their services. The primary aim is to serve JISC funded information services or services at JISC institutions, but information services outside the JISC domain may also find the proposed services useful and wish to participate in the test and refine process.
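A toy version of the kind of machine-processable response described (broader terms, narrower terms, preferred labels for a concept in a subject scheme) might look like the sketch below. The miniature scheme and the function name are invented; the real HILT services are SRW/U-, SOAP- and SKOS-based rather than an in-memory Python dictionary:

```python
# Invented miniature subject scheme: concept id -> its relations.
SCHEME = {
    "agriculture": {"prefLabel": "Agriculture", "broader": [], "narrower": ["dairying"]},
    "dairying": {"prefLabel": "Dairying", "broader": ["agriculture"], "narrower": []},
}

def concept_info(concept_id):
    """Return the hierarchy data a terminology service might serve."""
    entry = SCHEME[concept_id]
    return {
        "prefLabel": entry["prefLabel"],
        "broader": [SCHEME[b]["prefLabel"] for b in entry["broader"]],
        "narrower": [SCHEME[n]["prefLabel"] for n in entry["narrower"]],
    }

info = concept_info("dairying")
```

A client service could use such a response to expand a user's subject search with broader or narrower terms without needing to host the vocabulary itself — the transparency to end users that the abstract emphasizes.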
  13. Anderson, C.: The end of theory : the data deluge makes the scientific method obsolete (2008) 0.04
    0.044623144 = product of:
      0.111557856 = sum of:
        0.09395953 = weight(_text_:readable in 2819) [ClassicSimilarity], result of:
          0.09395953 = score(doc=2819,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.33940727 = fieldWeight in 2819, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2819)
        0.017598324 = product of:
          0.035196647 = sum of:
            0.035196647 = weight(_text_:data in 2819) [ClassicSimilarity], result of:
              0.035196647 = score(doc=2819,freq=4.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.24703519 = fieldWeight in 2819, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2819)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    "All models are wrong, but some are useful." So proclaimed statistician George Box 30 years ago, and he was right. But what choice did we have? Only models, from cosmological equations to theories of human behavior, seemed to be able to consistently, if imperfectly, explain the world around us. Until now. Today companies like Google, which have grown up in an era of massively abundant data, don't have to settle for wrong models. Indeed, they don't have to settle for models at all. Sixty years ago, digital computers made information readable. Twenty years ago, the Internet made it reachable. Ten years ago, the first search engine crawlers made it a single database. Now Google and like-minded companies are sifting through the most measured age in history, treating this massive corpus as a laboratory of the human condition. They are the children of the Petabyte Age. The Petabyte Age is different because more is different. Kilobytes were stored on floppy disks. Megabytes were stored on hard disks. Terabytes were stored in disk arrays. Petabytes are stored in the cloud. As we moved along that progression, we went from the folder analogy to the file cabinet analogy to the library analogy to - well, at petabytes we ran out of organizational analogies.
  14. Svensson, L.G.; Jahns, Y.: PDF, CSV, RSS and other Acronyms : redefining the bibliographic services in the German National Library (2010) 0.04
    Abstract
    In January 2010, the German National Library discontinued the print version of the national bibliography and replaced it with an online journal. This was the first step in a longer process of redefining the National Library's bibliographic services, leaving the field of traditional media - e.g. paper or CD-ROM databases - and focusing on publishing its data over the WWW. A new business model was set up: all web resources are now published in a separate bibliography series, and the bibliographic data are freely available. Step by step, the prices of the other bibliographic data will also be reduced. In the second stage of the project, the focus is on value-added services based on the National Library's catalogue. The main purpose is to introduce alerting services based on the user's search criteria, offering different access methods such as RSS feeds, integration with tools such as Zotero, or export of the bibliographic data as a CSV or PDF file. Current cataloguing standards remain a guideline for high-value end-user retrieval, but they will be supplemented by automated indexing procedures for finding and browsing the growing number of documents. The aim is a transparent cataloguing policy and well-arranged selection menus.
  15. Gómez-Pérez, A.; Corcho, O.: Ontology languages for the Semantic Web (2015) 0.04
    Abstract
    Ontologies have proven to be an essential element in many applications. They are used in agent systems, knowledge management systems, and e-commerce platforms. They can also generate natural language, integrate intelligent information, provide semantic-based access to the Internet, and extract information from texts, in addition to being used in many other applications to explicitly declare the knowledge embedded in them. However, not only are ontologies useful for applications in which knowledge plays a key role, but they can also trigger a major change in current Web contents. This change is leading to the third generation of the Web - known as the Semantic Web - which has been defined as "the conceptual structuring of the Web in an explicit machine-readable way." This definition does not differ much from the one used for defining an ontology: "An ontology is an explicit, machine-readable specification of a shared conceptualization." In fact, new ontology-based applications and knowledge architectures are developing for this new Web. A common claim of all these approaches is the need for languages to represent the semantic information that this Web requires, solving the heterogeneous data exchange in this heterogeneous environment. Here, we don't decide which language is best for the Semantic Web. Rather, our goal is to help developers find the most suitable language for their representation needs. The authors analyze the most representative ontology languages created for the Web and compare them using a common framework.
  16. Open Knowledge Foundation: Prinzipien zu offenen bibliographischen Daten (2011) 0.04
    Date
    22. 3.2011 18:22:29
    Footnote
    Original unter: http://openbiblio.net/principles/ (Open Bibliography and Open Bibliographic Data)
  17. McGrath, K.; Kules, B.; Fitzpatrick, C.: FRBR and facets provide flexible, work-centric access to items in library collections (2011) 0.04
    Abstract
    This paper explores a technique to improve searcher access to library collections by providing a faceted search interface built on a data model based on the Functional Requirements for Bibliographic Records (FRBR). The prototype provides a work-centric view of a moving image collection that is integrated with bibliographic and holdings data. Two sets of facets address important user needs - "what do you want?" and "how/where do you want it?" - enabling patrons to narrow, broaden and pivot across facet values instead of limiting them to the tree-structured hierarchy common in existing FRBR applications. The data model illustrates how FRBR is being adapted and applied beyond the traditional library catalog.
  18. Campbell, D.G.; Mayhew, A.: ¬A phylogenetic approach to bibliographic families and relationships (2017) 0.04
    Abstract
    This presentation applies the principles of phylogenetic classification to the phenomenon of bibliographic relationships in library catalogues. We argue that while the FRBR paradigm supports hierarchical bibliographic relationships between works and their various expressions and manifestations, we need a different paradigm to support associative bibliographic relationships of the kind detected in previous research. Numerous studies have shown the existence and importance of bibliographic relationships that lie outside that hierarchical FRBR model, particularly the importance of bibliographic families. We would like to suggest phylogenetics as a potential means of gaining access to those more elusive and ephemeral relationships. Phylogenetic analysis does not follow the Platonic conception of an abstract work that gives rise to specific instantiations; rather, it tracks relationships of kinship as they evolve over time. We use two examples to suggest ways in which phylogenetic trees could be represented in future library catalogues. The novels of Jane Austen are used to indicate how phylogenetic trees can represent, with greater accuracy, the line of Jane Austen adaptations, ranging from contemporary efforts to complete her unfinished work, through to the more recent efforts to graft horror memes onto the original text. Stanley Kubrick's 2001: A Space Odyssey provides an example of charting relationships both backwards and forwards in time, across different media and genres. We suggest three possible means of applying phylogenetics in the future: enhancement of the relationship designators in RDA, crowdsourcing user tags, and extracting relationship trees through big data analysis.
  19. Petric, T.: Bibliographic organisation of continuing resources in relation to the IFLA models : research within the Croatian corpus of continuing resources (2016) 0.04
    Abstract
    Comprehensive research on continuing resources has not been conducted in Croatia; this paper therefore examines the current bibliographic organisation of continuing resources against the parameters set by the IFLA models, and the potential flaws of the IFLA models in the bibliographic organisation of continuing resources compared to the valid national code used in Croatian cataloguing practice. Research on the corpus of Croatian continuing resources covered the period from 2000 to 2011. The observed titles were selected from the listed population by deliberate stratified sampling. By observing the bibliographic records of the selected sample in the NUL catalogue, the frequency of occurrence of the parameters from the IFLA models that should identify continuing resources was recorded, showing the characteristics of continuing resources. In determining the parameters of observation, the FRBR model is viewed in terms of bibliographic data, FRAD in terms of the other groups of entities, i.e. controlled access points for work, person and corporate body, and FRSAD in terms of the third group of entities, i.e. subject access to continuing resources. The results indicate that the current model of bibliographic organisation shows a high frequency of the attributes listed in the IFLA models for all types of resources, although this was not envisaged by the PPIAK; it is clear that practice has moved away from the national code, which does not offer solutions for all types of resources and for increasingly demanding users. For users to more easily access and identify continuing resources, the current model of bibliographic organisation of the Croatian corpus of continuing resources requires certain changes with regard to the new IFLA model. The research results also indicate the need to update the entity expression with the attribute mode of expression, and the entity manifestation with the attribute mode of issuance, as well as further consideration of the bibliographic organisation of continuing resources.
  20. Blosser, J.; Michaelson, R.; Routh, R.; Xia, P.: Defining the landscape of Web resources : Concluding Report of the BAER Web Resources Sub-Group (2000) 0.04
    Abstract
    The BAER Web Resources Group was charged in October 1999 with defining and describing the parameters of electronic resources that do not clearly belong to the categories being defined by the BAER Digital Group or the BAER Electronic Journals Group. After some difficulty identifying precisely which resources fell under the Group's charge, we finally named the following types of resources for our consideration: web sites, electronic texts, indexes, databases and abstracts, online reference resources, and networked and non-networked CD-ROMs. Electronic resources are a vast and growing collection that touches nearly every department within the Library. It is unrealistic to think one department can effectively administer all aspects of the collection. The Group then began to focus on the concern of bibliographic access to these varied resources, and to define parameters for handling or processing them within the Library. Some key elements became evident as the work progressed:
    * Selection process of resources to be acquired for the collection
    * Duplication of effort
    * Use of CORC
    * Resource Finder design
    * Maintenance of Resource Finder
    * CD-ROMs not networked
    * Communications
    * Voyager search limitations
    An unexpected collaboration with the Web Development Committee on the Resource Finder helped to steer the Group to more detailed descriptions of bibliographic access. This collaboration included development of data elements for the Resource Finder database, and some discussions on Library staff processing of the resources. The Web Resources Group invited expert testimony to help the Group broaden its view, envision public use of the resources, and discuss concerns related to technical services processing. The first testimony came from members of the Resource Finder Committee, who shared some background information on the Web Development Resource Finder Committee. The second testimony was from librarians who select electronic texts.
Three main themes were addressed: accessing CD-ROMs; the issue of including non-networked CD-ROMs in the Resource Finder; and some special concerns about electronic texts. The third testimony came from librarians who select indexes and abstracts and also provide Reference services. Appendices to this report include minutes of the meetings with the experts (Appendix A), a list of proposed data elements to be used in the Resource Finder (Appendix B), and recommendations made to the Resource Finder Committee (Appendix C). Below are summaries of the key elements.
    Date
    21. 4.2002 10:22:31

Years

Languages

  • e 414
  • d 169
  • a 3
  • i 3
  • el 2
  • f 1
  • nl 1

Types

  • a 288
  • p 25
  • r 14
  • s 14
  • i 12
  • n 7
  • m 6
  • x 5
  • b 3

Themes