Search (18 results, page 1 of 1)

  • Filter: type_ss:"r"
  • Filter: year_i:[2010 TO 2020}
  1. Deokattey, S.; Sharma, S.B.K.; Kumar, G.R.; Bhanumurthy, K.: Knowledge organization research : an overview (2015) 0.03
    Abstract
    The object of this literature review is to provide a historical perspective on R&D work in the area of Knowledge Organization (KO). This overview provides information on the major areas of KO. Journal articles published in the core areas of KO (Classification; Indexing; Thesauri and Taxonomies; the Internet and the subject approach to information in the electronic era; and Ontologies) are predominantly covered in this literature review. Coverage in this overview may not be completely exhaustive, but it succinctly showcases major developments in the area of KO. This review is a good source of additional reading material on KO, apart from prescribed reading material on KO.
    Date
    22. 6.2015 16:13:38
  2. Drewer, P.; Massion, F.; Pulitano, D.: Was haben Wissensmodellierung, Wissensstrukturierung, künstliche Intelligenz und Terminologie miteinander zu tun? (2017) 0.02
    Date
    13.12.2017 14:17:22
  3. Tober, M.; Hennig, L.; Furch, D.: SEO Ranking-Faktoren und Rang-Korrelationen 2014 : Google Deutschland (2014) 0.01
    Date
    13. 9.2014 14:45:22
  4. Positionspapier zur Weiterentwicklung der Bibliotheksverbünde als Teil einer überregionalen Informationsinfrastruktur (2011) 0.01
    Date
    7. 2.2011 19:52:22
  5. Förderung von Informationsinfrastrukturen für die Wissenschaft : Ein Positionspapier der Deutschen Forschungsgemeinschaft (2018) 0.01
    Date
    22. 3.2018 17:30:43
  6. Wehling, E.: Framing-Manual : Unser gemeinsamer freier Rundfunk ARD (2019) 0.01
    Date
    22. 2.2019 9:26:20
  7. Hochschule im digitalen Zeitalter : Informationskompetenz neu begreifen - Prozesse anders steuern (2012) 0.01
    Date
    8.12.2012 17:22:26
  8. Kaytoue, M.; Kuznetsov, S.O.; Assaghir, Z.; Napoli, A.: Embedding tolerance relations in concept lattices : an application in information fusion (2010) 0.00
    Abstract
    Formal Concept Analysis (FCA) is a well-founded mathematical framework used for conceptual classification and knowledge management. Given a binary table describing a relation between objects and attributes, FCA consists in building a set of concepts organized by a subsumption relation within a concept lattice. Accordingly, FCA requires transforming complex data, e.g. numbers, intervals, graphs, into binary data, leading to loss of information and poor interpretability of object classes. In this paper, we propose a pre-processing method producing binary data from complex data, taking advantage of similarity between objects. As a result, the concept lattice is composed of classes that are maximal sets of pairwise similar objects. This method is based on FCA and on a formalization of similarity as a tolerance relation (reflexive and symmetric). It applies to complex object descriptions, and especially here to interval data. Moreover, it can be applied to any kind of structured data for which a similarity can be defined (sequences, graphs, etc.). Finally, an application highlights that the resulting concept lattice plays an important role in information fusion problems, as illustrated with a real-world example in agronomy.
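The basic FCA step this abstract builds on, deriving formal concepts (extent/intent pairs) from a binary object-attribute table, can be sketched in a few lines. This is an illustrative toy example, not code from the paper; the context, object names, and attribute names below are invented:

```python
# Toy Formal Concept Analysis: enumerate all formal concepts of a small
# binary context by closing every attribute subset. Illustrative only.
from itertools import combinations

# Binary context: each object mapped to the set of attributes it has.
context = {
    "g1": {"a", "b"},
    "g2": {"a", "c"},
    "g3": {"a", "b", "c"},
}
attributes = {"a", "b", "c"}

def extent(attrs):
    """Objects possessing all attributes in attrs."""
    return {g for g, atts in context.items() if attrs <= atts}

def intent(objs):
    """Attributes shared by all objects in objs."""
    common = set(attributes)
    for g in objs:
        common &= context[g]
    return common

# A formal concept is a pair (E, I) where E = extent(I) and I = intent(E).
# Closing every attribute subset finds them all (fine for toy-sized data).
concepts = set()
for r in range(len(attributes) + 1):
    for combo in combinations(sorted(attributes), r):
        e = extent(set(combo))
        i = intent(e)
        concepts.add((frozenset(e), frozenset(i)))

# Concepts ordered by extent size reflect the subsumption order of the lattice.
for e, i in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(e), sorted(i))
```

The subsumption relation of the lattice is then simply extent inclusion: one concept is below another exactly when its extent is a subset of the other's.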
  9. Eckert, K.: ¬The ICE-map visualization (2011) 0.00
    Abstract
    In this paper, we describe in detail the Information Content Evaluation Map (ICE-Map Visualization, formerly referred to as IC Difference Analysis). The ICE-Map Visualization is a visual data-mining approach for all kinds of concept hierarchies that uses statistics about concept usage to help a user in the evaluation and maintenance of the hierarchy. It consists of a statistical framework that employs the notion of information content from information theory, as well as a visualization of the hierarchy and of the result of the statistical analysis by means of a treemap.
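The notion of information content the abstract mentions can be illustrated on a toy concept hierarchy. This is a hedged sketch of one common formulation, IC(c) = -log2 p(c) with p(c) estimated from usage counts aggregated over a concept's subtree; it is not the paper's own code, and the hierarchy and counts are invented:

```python
# Toy information-content computation over a concept hierarchy.
# A concept's probability is its subtree usage count divided by the total.
import math

# Invented hierarchy (concept -> children) and per-concept usage counts.
children = {"root": ["x", "y"], "x": [], "y": []}
usage = {"root": 0, "x": 6, "y": 2}

def subtree_count(c):
    """Usage of c plus usage of everything below it."""
    return usage[c] + sum(subtree_count(ch) for ch in children[c])

total = subtree_count("root")

def ic(c):
    """Information content: rarer concepts carry more information."""
    return -math.log2(subtree_count(c) / total)

print(ic("root"))  # 0.0: the root covers all usage
print(ic("x"))     # -log2(6/8), approximately 0.415
```

In the ICE-Map setting, such per-concept statistics would then be rendered as a treemap of the hierarchy, so that unusually high or low values stand out visually.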
  10. Horridge, M.; Brandt, S.: ¬A practical guide to building OWL ontologies using Protégé 4 and CO-ODE Tools (2011) 0.00
    Abstract
    This guide introduces Protégé 4 for creating OWL ontologies. Chapter 3 gives a brief overview of the OWL ontology language. Chapter 4 focuses on building an OWL-DL ontology and using a Description Logic Reasoner to check the consistency of the ontology and automatically compute the ontology class hierarchy. Chapter 7 describes some OWL constructs such as hasValue Restrictions and Enumerated classes, which aren't directly used in the main tutorial.
  11. Knowledge graphs : new directions for knowledge representation on the Semantic Web (2019) 0.00
    Abstract
    The increasingly pervasive nature of the Web, expanding to devices and things in everyday life, along with new trends in Artificial Intelligence, calls for new paradigms and a new look on Knowledge Representation and Processing at scale for the Semantic Web. The emerging, but still to be concretely shaped, concept of "Knowledge Graphs" provides an excellent unifying metaphor for this current status of Semantic Web research. More than two decades of Semantic Web research provide a solid basis and a promising technology and standards stack to interlink data, ontologies and knowledge on the Web. However, neither are applications for Knowledge Graphs as such limited to Linked Open Data, nor are instantiations of Knowledge Graphs in enterprises, while often inspired by it, limited to the core Semantic Web stack. This report documents the program and the outcomes of Dagstuhl Seminar 18371 "Knowledge Graphs: New Directions for Knowledge Representation on the Semantic Web", where a group of experts from academia and industry discussed fundamental questions around these topics for a week in early September 2018, including the following: What are knowledge graphs? Which applications do we see emerging? Which open research questions still need to be addressed, and which technology gaps still need to be closed?
    Editor
    Polleres, A.
  12. Riva, P.; Boeuf, P. le; Zumer, M.: IFLA Library Reference Model : a conceptual model for bibliographic information (2017) 0.00
    Abstract
    Definition of a conceptual reference model to provide a framework for the analysis of non-administrative metadata relating to library resources. The resulting model definition was approved by the FRBR Review Group (November 2016), and then made available to the Standing Committees of the Sections on Cataloguing and Subject Analysis & Access, as well as to the ISBD Review Group, for comment in December 2016. The final document was approved by the IFLA Committee on Standards (August 2017).
  13. British Library / FAST/Dewey Review Group: Consultation on subject indexing and classification standards applied by the British Library (2015) 0.00
    Abstract
    A broad-based review of the subject and classification schemes used on British Library records began in late 2014. The review was undertaken in response to a number of drivers, including:
    - An increasing demand on available resources due to the rapidly expanding digital publishing arena, and a continuing steady state in print publication patterns
    - Increased demands on metadata to meet changing audience expectations.
    Content
    The Library is consulting with stakeholders concerning the potential impact of these proposals. No firm decisions have yet been taken regarding either of these standards.
    FAST
    1. The British Library proposes to adopt FAST selectively to extend the scope of subject indexing of current and legacy content.
    2. The British Library proposes to implement FAST as a replacement for LCSH in all current cataloguing, subject to mitigation of the risks identified above, in particular the question of sustainability.
    DDC
    3. The British Library proposes to implement Abridged DDC selectively to extend the scope of subject indexing of current and legacy content.
  14. Gradmann, S.: Knowledge = Information in context : on the importance of semantic contextualisation in Europeana (2010) 0.00
    Abstract
    "Europeana.eu is about ideas and inspiration. It links you to 6 million digital items." This is the opening statement taken from the Europeana WWW site (http://www.europeana.eu/portal/aboutus.html), and it clearly is concerned with the mission of Europeana, without, however, being over-explicit as to the precise nature of that mission. Europeana's current logo, too, has a programmatic aspect: the slogan "Think Culture" clearly again is related to Europeana's mission and at the same time seems somewhat closer to the point: 'thinking' culture evokes notions like conceptualisation, reasoning, semantics and the like. Still, all this remains fragmentary and insufficient to actually clarify the functional scope and mission of Europeana. In fact, the author of the present contribution is convinced that Europeana has too often been described in terms of sheer quantity, as a high-volume aggregation of digital representations of cultural heritage objects, without sufficiently stressing the functional aspects of this endeavour. This conviction motivates the present contribution on some of the essential functional aspects of Europeana, making clear that such a contribution, even if its author is deeply involved in building Europeana, should not be read as an official statement of the project or of the European Commission (which it is not!), but as a personal statement from an information science perspective. From this perspective the opening statement is that Europeana is much more than a machine for mechanical accumulation of object representations, and that one of its main characteristics should be to enable the generation of knowledge pertaining to cultural artefacts.
    The rest of the paper is about the implications of this initial statement in terms of information science, about the way we technically prepare to implement the necessary data structures and functionality, and about the novel functionality Europeana will offer based on these elements, which goes well beyond the 'traditional' digital library paradigm. However, prior to exploring these areas it may be useful to recall the notion of 'knowledge' that forms the basis of this contribution, and which in turn is part of the well-known continuum reaching from data via information and knowledge to wisdom.
  15. Horch, A.; Kett, H.; Weisbecker, A.: Semantische Suchsysteme für das Internet : Architekturen und Komponenten semantischer Suchmaschinen (2013) 0.00
  16. Klingenberg, A.: Referenzrahmen Informationskompetenz (2016) 0.00
  17. Darstellung der CrissCross-Mappingrelationen im Rahmen des Semantic Web (2010) 0.00
    Abstract
    Within the CrissCross project, a multilingual, thesaurus-based retrieval vocabulary for heterogeneously indexed documents was created. This retrieval vocabulary consists, among other things, of the subject headings of the Schlagwortnormdatei (SWD) and the notations of the Dewey Decimal Classification (DDC). The Schlagwortnormdatei offers a normalized, terminologically controlled vocabulary. It contains subject headings from all subject fields and heading categories, which are used by the participating libraries for verbal subject indexing and are updated daily in the process. Among other things, the Schlagwortnormdatei is also used for the verbal indexing of the Deutsche Nationalbibliografie. During verbal indexing, at least one subject heading sequence is assigned to each content. Subject heading sequences are used to describe the complex topics of a document. In addition, since volume 2004, every title in the new-publications service of the Deutsche Nationalbibliografie has also been assigned to at least one subject group. The structuring of the subject groups largely follows the second level of the Dewey Decimal Classification. The Dewey Decimal Classification is an internationally widespread universal classification. Numerous translations of it exist, including into German. Since January 2006, the DDC has been used together with the SWD for subject indexing in the Deutsche Nationalbibliografie. When indexing with the DDC, a single DDC notation is assigned to each content. Complex topics are represented by synthesizing two or more notations into a new notation. The DDC notation is thus comparable both with individual SWD subject headings and with entire subject heading sequences.
  18. Haffner, A.: Internationalisierung der GND durch das Semantic Web (2012) 0.00