Search (59 results, page 1 of 3)

  • type_ss:"r"
  1. Kaytoue, M.; Kuznetsov, S.O.; Assaghir, Z.; Napoli, A.: Embedding tolerance relations in concept lattices : an application in information fusion (2010) 0.04
    0.0448143 = product of:
      0.0896286 = sum of:
        0.06843241 = weight(_text_:data in 4843) [ClassicSimilarity], result of:
          0.06843241 = score(doc=4843,freq=14.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.46216056 = fieldWeight in 4843, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4843)
        0.021196188 = product of:
          0.042392377 = sum of:
            0.042392377 = weight(_text_:processing in 4843) [ClassicSimilarity], result of:
              0.042392377 = score(doc=4843,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.22363065 = fieldWeight in 4843, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4843)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
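    The explain tree above can be cross-checked by hand: ClassicSimilarity is plain TF-IDF, with tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), and each term clause scored as queryWeight x fieldWeight before the coord factors are applied. A minimal sketch (the constants are copied from the tree; the function name is illustrative, not Lucene's API):

```python
import math

def classic_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """One term clause under Lucene's ClassicSimilarity (TF-IDF)."""
    tf = math.sqrt(freq)                             # 3.7416575 for freq=14
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # ~3.1620505 for "data"
    query_weight = idf * query_norm                  # ~0.14807065
    field_weight = tf * idf * field_norm             # ~0.46216056
    return query_weight * field_weight               # ~0.06843241

# Constants copied from the explain tree of hit 1.
data = classic_score(14.0, 5088, 44218, 0.046827413, 0.0390625)
processing = classic_score(2.0, 2097, 44218, 0.046827413, 0.0390625)

# The "processing" clause sits under coord(1/2); the sum of both clauses
# is then scaled by coord(2/4), reproducing the hit's total of ~0.0448143.
total = (data + processing * 0.5) * 0.5
print(round(total, 7))
```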
    
    Abstract
    Formal Concept Analysis (FCA) is a well-founded mathematical framework used for conceptual classification and knowledge management. Given a binary table describing a relation between objects and attributes, FCA consists of building a set of concepts organized by a subsumption relation within a concept lattice. Accordingly, FCA requires transforming complex data, e.g. numbers, intervals, graphs, into binary data, leading to loss of information and poor interpretability of object classes. In this paper, we propose a pre-processing method producing binary data from complex data by taking advantage of similarity between objects. As a result, the concept lattice is composed of classes that are maximal sets of pairwise similar objects. This method is based on FCA and on a formalization of similarity as a tolerance relation (reflexive and symmetric). It applies to complex object descriptions and especially here to interval data. Moreover, it can be applied to any kind of structured data for which a similarity can be defined (sequences, graphs, etc.). Finally, an application highlights that the resulting concept lattice plays an important role in information fusion problems, as illustrated with a real-world example in agronomy.
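    As a toy illustration of the tolerance-relation idea described in the abstract (not the authors' actual algorithm): a tolerance is reflexive and symmetric but not transitive, so the "classes" are the maximal cliques of the tolerance graph. A sketch in Python, with a hypothetical numeric threshold standing in for the paper's interval similarity:

```python
def tolerant(x, y, theta=2.0):
    # Hypothetical stand-in for the paper's interval similarity: two
    # values are similar if they differ by at most theta. The relation
    # is reflexive and symmetric but not transitive -- a tolerance.
    return abs(x - y) <= theta

def tolerance_classes(objects, rel):
    """Maximal sets of pairwise similar objects, i.e. the maximal
    cliques of the tolerance graph (plain Bron-Kerbosch recursion)."""
    neighbours = {o: {p for p in objects if p != o and rel(o, p)}
                  for o in objects}
    classes = []

    def bk(r, p, x):
        if not p and not x:
            classes.append(frozenset(r))
        for v in list(p):
            bk(r | {v}, p & neighbours[v], x & neighbours[v])
            p = p - {v}
            x = x | {v}

    bk(set(), set(objects), set())
    return classes

# 1, 2 and 3 are pairwise within theta=2 of each other; 8 stands alone.
print(sorted(sorted(c) for c in tolerance_classes([1.0, 2.0, 3.0, 8.0], tolerant)))
# -> [[1.0, 2.0, 3.0], [8.0]]
```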
    Series
    Knowledge and data representation and management; no.7353
  2. Drewer, P.; Massion, F.; Pulitano, D.: Was haben Wissensmodellierung, Wissensstrukturierung, künstliche Intelligenz und Terminologie miteinander zu tun? (2017) 0.04
    0.04172619 = product of:
      0.08345238 = sum of:
        0.05173004 = weight(_text_:data in 5576) [ClassicSimilarity], result of:
          0.05173004 = score(doc=5576,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.34936053 = fieldWeight in 5576, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.078125 = fieldNorm(doc=5576)
        0.03172234 = product of:
          0.06344468 = sum of:
            0.06344468 = weight(_text_:22 in 5576) [ClassicSimilarity], result of:
              0.06344468 = score(doc=5576,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.38690117 = fieldWeight in 5576, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5576)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This publication describes the connections between knowledge-rich, concept-oriented terminologies, ontologies, big data and artificial intelligence.
    Date
    13.12.2017 14:17:22
  3. Leung, C.H.C.; Hibler, J.N.D.: Architecture of a pictorial database management system (1991) 0.03
    0.032942846 = product of:
      0.06588569 = sum of:
        0.036211025 = weight(_text_:data in 4797) [ClassicSimilarity], result of:
          0.036211025 = score(doc=4797,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24455236 = fieldWeight in 4797, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4797)
        0.029674664 = product of:
          0.05934933 = sum of:
            0.05934933 = weight(_text_:processing in 4797) [ClassicSimilarity], result of:
              0.05934933 = score(doc=4797,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.3130829 = fieldWeight in 4797, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4797)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Addresses the problems of content retrieval in the construction of pictorial database management systems. Presents a generalisable architecture for the effective identification of specific pictures from a large collection and describes a prototype system, based on this architecture, that has been successfully implemented. The architecture consists of 3 main components: picture description, picture indexing and filing, and picture retrieval. The description of pictures is facilitated by using the main semantic concepts employed in the entity-attribute-relationship model. The chief function of the picture indexing and filing component is to convert the logical representations into a relational data format in preparation for subsequent processing initiated by picture queries.
  4. Knowledge graphs : new directions for knowledge representation on the Semantic Web (2019) 0.03
    0.028887425 = product of:
      0.05777485 = sum of:
        0.03657866 = weight(_text_:data in 51) [ClassicSimilarity], result of:
          0.03657866 = score(doc=51,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24703519 = fieldWeight in 51, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=51)
        0.021196188 = product of:
          0.042392377 = sum of:
            0.042392377 = weight(_text_:processing in 51) [ClassicSimilarity], result of:
              0.042392377 = score(doc=51,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.22363065 = fieldWeight in 51, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=51)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The increasingly pervasive nature of the Web, expanding to devices and things in everyday life, along with new trends in Artificial Intelligence call for new paradigms and a new look on Knowledge Representation and Processing at scale for the Semantic Web. The emerging, but still to be concretely shaped concept of "Knowledge Graphs" provides an excellent unifying metaphor for this current status of Semantic Web research. More than two decades of Semantic Web research provides a solid basis and a promising technology and standards stack to interlink data, ontologies and knowledge on the Web. However, neither are applications for Knowledge Graphs as such limited to Linked Open Data, nor are instantiations of Knowledge Graphs in enterprises - while often inspired by - limited to the core Semantic Web stack. This report documents the program and the outcomes of Dagstuhl Seminar 18371 "Knowledge Graphs: New Directions for Knowledge Representation on the Semantic Web", where a group of experts from academia and industry discussed fundamental questions around these topics for a week in early September 2018, including the following: what are knowledge graphs? Which applications do we see to emerge? Which open research questions still need to be addressed and which technology gaps still need to be closed?
  5. Buchbinder, R.; Weidemüller, H.U.; Tiedemann, E.: Biblio-Data, die nationalbibliographische Datenbank der Deutschen Bibliothek (1979) 0.02
    0.023314415 = product of:
      0.09325766 = sum of:
        0.09325766 = weight(_text_:data in 4) [ClassicSimilarity], result of:
          0.09325766 = score(doc=4,freq=26.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.6298187 = fieldWeight in 4, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4)
      0.25 = coord(1/4)
    
    Abstract
    Part A introduces the German national bibliographic database Biblio-Data and its foundations. Biblio-Data is based on IBM's information retrieval system STAIRS, which was enhanced by additional programs developed by the ZMD. The main emphasis of this contribution lies on the discussion of the Biblio-Data concept, according to which the data of the Deutsche Bibliographie are prepared and entered in a retrieval-oriented form. Peculiarities, problems and shortcomings arising from the fact that Biblio-Data is built on national bibliographic data are discussed in detail. Two further contributions use a number of examples to show the varied search possibilities offered by Biblio-Data, making clear that it allows not only much faster but also better searches. Part B demonstrates that Biblio-Data can automate neither content analysis nor the assignment of subject headings. At its current stage of deployment, Biblio-Data's task is to support subject indexing, which continues to be carried out conventionally and intellectually, through retrospective searches, in particular through considerably easier access to earlier indexing results. Part C describes the practical work with Biblio-Data in bibliographic research and the compilation of literature lists, and concludes that effective bibliographic work consists of a sensible combination of database retrieval and conventional searching.
    Content
    Contains the contributions: Buchbinder, R.: Grundlagen von Biblio-Data (S.11-68); Weidemüller, H.U.: Biblio-Data in der Sacherschließung der Deutschen Bibliothek (S.69-105); Tiedemann, E.: Biblio-Data in der bibliographischen Auskunft der Deutschen Bibliothek (S.107-123)
    Object
    Biblio-Data
  6. Robinson, B.: Mixed mode document research : the collected reports (1992) 0.02
    0.022399765 = product of:
      0.08959906 = sum of:
        0.08959906 = weight(_text_:data in 4796) [ClassicSimilarity], result of:
          0.08959906 = score(doc=4796,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.60511017 = fieldWeight in 4796, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.078125 = fieldNorm(doc=4796)
      0.25 = coord(1/4)
    
    Abstract
    Presents the collected reports of work carried out under British Library grant no. SI/G/880 ('Extensions of mixed mode data bases to support temporal data types'). The studies were concerned with the storage and retrieval of multimedia data such as sound and motion picture scenes
  7. Leeves, J.: Harmonising standards for bibliographic data interchange (1993) 0.02
    0.01828933 = product of:
      0.07315732 = sum of:
        0.07315732 = weight(_text_:data in 6031) [ClassicSimilarity], result of:
          0.07315732 = score(doc=6031,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.49407038 = fieldWeight in 6031, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.078125 = fieldNorm(doc=6031)
      0.25 = coord(1/4)
    
    Abstract
    Reviews the provision for bibliographic data within EDIFACT, compares those provisions with the BIC draft standards for bibliographic databases and examines the implications for MARC based standards. Outlines the role of the major players involved. Describes standards dealing with EDIFACT in greatest detail. Describes the library systems using the records.
  8. Morley, N.: ¬The administration of permissions procedures via electronic data interchange (EDI) (1994) 0.02
    0.01828933 = product of:
      0.07315732 = sum of:
        0.07315732 = weight(_text_:data in 7017) [ClassicSimilarity], result of:
          0.07315732 = score(doc=7017,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.49407038 = fieldWeight in 7017, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.078125 = fieldNorm(doc=7017)
      0.25 = coord(1/4)
    
    Abstract
    Presents the results of a combined survey of publishers and information users, predominantly librarians, to investigate the place and value of electronic data interchange (EDI) as a facility to improve copyright permissions clearance procedures
  9. Lawrence, G.S.; Matthews, J.R.: Detailed data analysis of the CLR online catalog project : final report for the Council on Library Resources (1984) 0.02
    0.018105512 = product of:
      0.07242205 = sum of:
        0.07242205 = weight(_text_:data in 2420) [ClassicSimilarity], result of:
          0.07242205 = score(doc=2420,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.48910472 = fieldWeight in 2420, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.109375 = fieldNorm(doc=2420)
      0.25 = coord(1/4)
    
  10. Leeves, J.: EDIBIB: harmonising standards for bibliographic data interchange : a report prepared for Book Industry Communication (1993) 0.02
    0.017919812 = product of:
      0.07167925 = sum of:
        0.07167925 = weight(_text_:data in 9) [ClassicSimilarity], result of:
          0.07167925 = score(doc=9,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.48408815 = fieldWeight in 9, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=9)
      0.25 = coord(1/4)
    
    Abstract
    Report commissioned by Book Industry Communication (BIC) and funded by the British National Bibliography Research Fund and the British National Bibliographic Service. The aims of the project were: to review the provisions for bibliographic data within EDIFACT (Electronic Data Interchange for Administration, Commerce and Transport); to compare those provisions with the BIC draft standards for bibliographic databases and the book publishing industry, and to examine the implications for MARC based databases, such as UKMARC
  11. Philip, G.; Crookes, D.; Juhasz, Z.: Development and implementation of a photographic database using a network of transputers (1994) 0.01
    0.014685151 = product of:
      0.058740605 = sum of:
        0.058740605 = product of:
          0.11748121 = sum of:
            0.11748121 = weight(_text_:processing in 944) [ClassicSimilarity], result of:
              0.11748121 = score(doc=944,freq=6.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.61974347 = fieldWeight in 944, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0625 = fieldNorm(doc=944)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Reports results of a project to investigate the use of concurrent processing technology, in the form of transputers, for the processing of a collection of historical photographs housed in the Ulster Museum. The objectives of the exercise were: to create an image database to provide rapid access to individual items; and to study the application of advanced image processing techniques in the manipulation of photographs
  12. Barker, P.: ¬An examination of the use of the OSI Directory for accessing bibliographic information : project ABDUX (1993) 0.01
    0.013439858 = product of:
      0.053759433 = sum of:
        0.053759433 = weight(_text_:data in 7310) [ClassicSimilarity], result of:
          0.053759433 = score(doc=7310,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.3630661 = fieldWeight in 7310, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=7310)
      0.25 = coord(1/4)
    
    Abstract
    Describes the work of the ABDUX project, containing a brief description of the rationale for using X.500 for access to bibliographic information. Outlines the project's design work and a demonstration system. Reviews the standards applicable to bibliographic data and library OPACs. Highlights difficulties found when handling bibliographic data in library systems. Discusses the service requirements of OPACs for accessing bibliographic data, discussing how X.500 Directory services may be used. Suggests the DIT structures that could be used for storing both bibliographic information and descriptions of information resources in general in the directory. Describes the way in which the model of bibliographic data is presented. Outlines the syntax of ASN.1 and how records and fields may be described in terms of X.500 object classes and attribute types. Details the mapping of MARC format into an X.500 compatible form. Provides the schema information for representing research notes and archives, not covered by MARC definitions. Examines the success in implementing the designs and looks ahead to future possibilities.
  13. Sweeney, R.: Standard book subject categories for EDI (1994) 0.01
    0.01293251 = product of:
      0.05173004 = sum of:
        0.05173004 = weight(_text_:data in 893) [ClassicSimilarity], result of:
          0.05173004 = score(doc=893,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.34936053 = fieldWeight in 893, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.078125 = fieldNorm(doc=893)
      0.25 = coord(1/4)
    
    Abstract
    Reports the results of an investigation into existing systems of subject categories at present in use in the bibliographic community. Makes recommendations for establishing a standard set of book subject categories for Electronic Data Interchange
  14. Fresko, M.: Sources of digital information (1994) 0.01
    0.012802532 = product of:
      0.051210128 = sum of:
        0.051210128 = weight(_text_:data in 7964) [ClassicSimilarity], result of:
          0.051210128 = score(doc=7964,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.34584928 = fieldWeight in 7964, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7964)
      0.25 = coord(1/4)
    
    Abstract
    Presents the results of a study, carried out by The Marc Fresko Consultancy, Kenley, UK, to explore the availability of digital information worldwide, intended to be of use to the British Library as it moves towards the provision of more services based on digital data storage. The study involved a series of interlocking surveys and reports details of over 200 digital data sources with descriptions, in varying degrees of detail, which are extensively indexed in 4 separate indexes. Concludes that the universe of digital sources is too large to be quantified usefully and recommends that any future work be focused on specific areas of interest. Suggests a number of possible future actions: listing sources in areas of interest; promoting or facilitating good archiving practices for digital collections; and providing access to digital collections at the British Library.
  15. Modelle und Konzepte der Beitragsdokumentation und Filmarchivierung im Lokalfernsehsender Hamburg I : Endbericht (1996) 0.01
    0.012688936 = product of:
      0.050755743 = sum of:
        0.050755743 = product of:
          0.101511486 = sum of:
            0.101511486 = weight(_text_:22 in 7383) [ClassicSimilarity], result of:
              0.101511486 = score(doc=7383,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.61904186 = fieldWeight in 7383, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=7383)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 2.1997 19:46:30
  16. Haffner, A.: Internationalisierung der GND durch das Semantic Web (2012) 0.01
    0.011975672 = product of:
      0.04790269 = sum of:
        0.04790269 = weight(_text_:data in 318) [ClassicSimilarity], result of:
          0.04790269 = score(doc=318,freq=14.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.32351238 = fieldWeight in 318, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.02734375 = fieldNorm(doc=318)
      0.25 = coord(1/4)
    
    Abstract
    The Gemeinsame Normdatei (GND) has been, since April 2012, the file containing the authority data used in German-language librarianship. Consequently, a representation of these data for publication as Linked Data in the Semantic Web must be established. Besides the actual provision of GND data in the Semantic Web, the data are to be linked with datasets already available as Linked Data (DBpedia, VIAF, etc.) and kept compatible where possible, making the GND accessible to an international and cross-domain audience. This document primarily describes how the GND Linked Data representation came about and the path towards specifying an ontology of its own. To this end, after a short introduction to the GND, the basic principles and most important standards for publishing Linked Data in the Semantic Web are presented, so that existing vocabularies and ontologies of the library domain can then be examined. This is followed by an excursus on the general procedure for providing Linked Data, in which the oft-cited Open World Assumption is critically questioned and the problems associated with it, particularly with regard to interoperability and reusability, are exposed. To avoid interoperability problems, the recommendations of the Library Linked Data Incubator Group [LLD11] are followed.
    The chapter on application profiles as a basis for ontology development critically examines the specification of Dublin Core application profiles in order to determine when and in what form their use is appropriate for the task of providing Linked Data. The following sections present in more detail the GND ontology, which serves as the standard for serializing GND data in the Semantic Web, together with the modelling decisions behind it. The technique of vocabulary alignment is given a prominent position, since it is seen as a decisive mechanism for increasing interoperability and reusability. The linking to external datasets is also treated in depth: selected datasets were examined with regard to their quality and currency, and recommendations are given for their implementation within the GND dataset. Finally, a summary and an outlook on further steps are given.
  17. Adler, R.; Ewing, J.; Taylor, P.: Citation statistics : A report from the International Mathematical Union (IMU) in cooperation with the International Council of Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS) (2008) 0.01
    0.011639258 = product of:
      0.04655703 = sum of:
        0.04655703 = weight(_text_:data in 2417) [ClassicSimilarity], result of:
          0.04655703 = score(doc=2417,freq=18.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.31442446 = fieldWeight in 2417, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2417)
      0.25 = coord(1/4)
    
    Abstract
    This is a report about the use and misuse of citation data in the assessment of scientific research. The idea that research assessment must be done using "simple and objective" methods is increasingly prevalent today. The "simple and objective" methods are broadly interpreted as bibliometrics, that is, citation data and the statistics derived from them. There is a belief that citation statistics are inherently more accurate because they substitute simple numbers for complex judgments, and hence overcome the possible subjectivity of peer review. But this belief is unfounded. - Relying on statistics is not more accurate when the statistics are improperly used. Indeed, statistics can mislead when they are misapplied or misunderstood. Much of modern bibliometrics seems to rely on experience and intuition about the interpretation and validity of citation statistics. - While numbers appear to be "objective", their objectivity can be illusory. The meaning of a citation can be even more subjective than peer review. Because this subjectivity is less obvious for citations, those who use citation data are less likely to understand their limitations. - The sole reliance on citation data provides at best an incomplete and often shallow understanding of research - an understanding that is valid only when reinforced by other judgments. Numbers are not inherently superior to sound judgments.
    Using citation data to assess research ultimately means using citation-based statistics to rank things: journals, papers, people, programs, and disciplines. The statistical tools used to rank these things are often misunderstood and misused. - For journals, the impact factor is most often used for ranking. This is a simple average derived from the distribution of citations for a collection of articles in the journal. The average captures only a small amount of information about that distribution, and it is a rather crude statistic. In addition, there are many confounding factors when judging journals by citations, and any comparison of journals requires caution when using impact factors. Using the impact factor alone to judge a journal is like using weight alone to judge a person's health. - For papers, instead of relying on the actual count of citations to compare individual papers, people frequently substitute the impact factor of the journals in which the papers appear. They believe that higher impact factors must mean higher citation counts. But this is often not the case! This is a pervasive misuse of statistics that needs to be challenged whenever and wherever it occurs. - For individual scientists, complete citation records can be difficult to compare. As a consequence, there have been attempts to find simple statistics that capture the full complexity of a scientist's citation record with a single number. The most notable of these is the h-index, which seems to be gaining in popularity. But even a casual inspection of the h-index and its variants shows that these are naive attempts to understand complicated citation records. While they capture a small amount of information about the distribution of a scientist's citations, they lose crucial information that is essential for the assessment of research.
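    The h-index discussed in the abstract has a concrete definition: h is the largest number such that at least h of an author's papers have at least h citations each. A small sketch (the citation counts are invented for illustration):

```python
def h_index(citations):
    # h = the largest h such that at least h papers have >= h citations.
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Invented citation records, for illustration only.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
print(h_index([25, 8, 5, 3, 3]))  # -> 3  (one highly cited paper does not raise h)
```

    The second example illustrates the report's point that the index discards much of the distribution: a single heavily cited paper leaves h unchanged.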
    The validity of statistics such as the impact factor and h-index is neither well understood nor well studied. The connection of these statistics with research quality is sometimes established on the basis of "experience." The justification for relying on them is that they are "readily available." The few studies of these statistics that were done focused narrowly on showing a correlation with some other measure of quality rather than on determining how one can best derive useful information from citation data. We do not dismiss citation statistics as a tool for assessing the quality of research; citation data and statistics can provide some valuable information. We recognize that assessment must be practical, and for this reason easily-derived citation statistics almost surely will be part of the process. But citation data provide only a limited and incomplete view of research quality, and the statistics derived from citation data are sometimes poorly understood and misused. Research is too important to measure its value with only a single coarse tool. We hope those involved in assessment will read both the commentary and the details of this report in order to understand not only the limitations of citation statistics but also how better to use them. If we set high standards for the conduct of science, surely we should set equally high standards for assessing its quality.
  18. Resource Description and Access (2008) 0.01
    0.011199882 = product of:
      0.04479953 = sum of:
        0.04479953 = weight(_text_:data in 2436) [ClassicSimilarity], result of:
          0.04479953 = score(doc=2436,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.30255508 = fieldWeight in 2436, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2436)
      0.25 = coord(1/4)
    
    Abstract
    RDA provides a set of guidelines and instructions on formulating data to support resource discovery. The data created using RDA to describe a resource are designed to assist users performing the following tasks:
    - find, i.e. to find resources that correspond to the user's stated search criteria;
    - identify, i.e. to confirm that the resource described corresponds to the resource sought, or to distinguish between two or more resources with similar characteristics;
    - select, i.e. to select a resource that is appropriate to the user's needs;
    - obtain, i.e. to acquire or access the resource described.
    The data created using RDA to describe an entity associated with a resource (a person, family, corporate body, concept, etc.) are designed to assist users performing the following tasks:
    - find, i.e. to find information on that entity and on resources associated with the entity;
    - identify, i.e. to confirm that the entity described corresponds to the entity sought, or to distinguish between two or more entities with similar names, etc.;
    - clarify, i.e. to clarify the relationship between two or more such entities, or to clarify the relationship between the entity described and a name by which that entity is known;
    - understand, i.e. to understand why a particular name or title, or form of name or title, has been chosen as the preferred name or title for the entity.
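    The relevance figures attached to each entry above follow Lucene's ClassicSimilarity explanation format, and the breakdown for entry 18 can be reproduced arithmetically. A minimal sketch (constants copied from the explanation block; `coord(1/4)` means 1 of 4 query terms matched):

```python
import math

# Constants from the explanation for entry 18 (term "data", freq=6 in doc 2436):
freq = 6.0
idf = 3.1620505           # idf(docFreq=5088, maxDocs=44218)
query_norm = 0.046827413  # queryNorm
field_norm = 0.0390625    # fieldNorm(doc=2436)

tf = math.sqrt(freq)                  # ClassicSimilarity: tf = sqrt(freq)
query_weight = idf * query_norm       # ~ 0.14807065
field_weight = tf * idf * field_norm  # ~ 0.30255508
score = query_weight * field_weight   # ~ 0.04479953
coord = 1.0 / 4.0                     # coord(1/4)
final = score * coord                 # ~ 0.0112, the 0.01 shown for entry 18
```

    Multiplying the pieces out recovers every intermediate value printed in the explanation, which is a handy sanity check when reading these score dumps.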
  19. Wheelbarger, J.J.; Clouse, R.W.: ¬A comparison of a manual library reclassification project with a computer automated library reclassification project (1975) 0.01
    0.011102819 = product of:
      0.044411276 = sum of:
        0.044411276 = product of:
          0.08882255 = sum of:
            0.08882255 = weight(_text_:22 in 3473) [ClassicSimilarity], result of:
              0.08882255 = score(doc=3473,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.5416616 = fieldWeight in 3473, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3473)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Pages
    22 p.
  20. Matthews, J.R.; Parker, M.R.: Local Area Networks and Wide Area Networks for libraries (1995) 0.01
    0.011102819 = product of:
      0.044411276 = sum of:
        0.044411276 = product of:
          0.08882255 = sum of:
            0.08882255 = weight(_text_:22 in 2656) [ClassicSimilarity], result of:
              0.08882255 = score(doc=2656,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.5416616 = fieldWeight in 2656, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2656)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    30.11.1995 20:53:22
