Search (241 results, page 1 of 13)

  • Filter: type_ss:"el"
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.57
    
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
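The per-result scores in this listing come from Lucene's ClassicSimilarity. As a sanity check, here is a minimal sketch reproducing the top result's 0.57 score from the values its explain output reports (docFreq=24, maxDocs=44218, queryNorm 0.037368443, fieldNorm 0.078125, termFreq 2); the formula is the standard tf-idf one, assumed to match this installation's scorer:

```python
import math

# Explain values reported for the top result (doc 1826).
doc_freq, max_docs = 24, 44218
idf = 1 + math.log(max_docs / (doc_freq + 1))  # ~8.478011
query_norm = 0.037368443
field_norm = 0.078125
freq = 2.0

tf = math.sqrt(freq)                       # ~1.4142135
query_weight = idf * query_norm            # ~0.31681007
field_weight = tf * idf * field_norm       # ~0.93669677
term_score = query_weight * field_weight   # ~0.29675496 per matched term

# The explain tree sums one term at coord(1/3) plus three full-weight terms,
# then applies coord(4/7): 4 of the 7 top-level query clauses matched.
final = (term_score / 3 + 3 * term_score) * (4 / 7)  # ~0.56524754
```

The coord() factors are Lucene's match-ratio penalties; the remaining results in the listing follow the same pattern with smaller fieldNorm values (longer fields).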
  2. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.45
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  3. Shala, E.: Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.28
    
    Footnote
    Cf. https://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=2ahUKEwizweHljdbcAhVS16QKHXcFD9QQFjABegQICRAB&url=https%3A%2F%2Fwww.researchgate.net%2Fpublication%2F271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls&usg=AOvVaw06orrdJmFF2xbCCp_hL26q.
  4. Priss, U.: Faceted knowledge representation (1999) 0.03
    
    Abstract
    Faceted Knowledge Representation provides a formalism for implementing knowledge systems. The basic notions of faceted knowledge representation are "unit", "relation", "facet" and "interpretation". Units are atomic elements and can be abstract elements or refer to external objects in an application. Relations are sequences or matrices of 0 and 1's (binary matrices). Facets are relational structures that combine units and relations. Each facet represents an aspect or viewpoint of a knowledge system. Interpretations are mappings that can be used to translate between different representations. This paper introduces the basic notions of faceted knowledge representation. The formalism is applied here to an abstract modeling of a faceted thesaurus as used in information retrieval.
    Date
    22. 1.2016 17:30:31
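The notions in Priss's abstract (units, binary 0/1 relation matrices, facets, interpretations) can be sketched concretely. A minimal illustrative encoding; all names, data and the helper function are invented for illustration, not taken from the paper:

```python
# Units are atomic elements; a relation is a 0/1 matrix over units;
# a facet bundles units with relations; an interpretation maps units
# of one facet into another representation.
units = ["doc1", "doc2", "topicA", "topicB"]

# Binary relation "is-about" as a 0/1 matrix, rows/columns indexed by `units`.
relation = [
    [0, 0, 1, 0],  # doc1 is-about topicA
    [0, 0, 0, 1],  # doc2 is-about topicB
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]

facet = {"units": units, "relations": {"is-about": relation}}

# An interpretation translating this facet's units into thesaurus labels.
interpretation = {"topicA": "Knowledge organization", "topicB": "Information retrieval"}

def related(facet, rel, unit):
    """Units related to `unit` under relation `rel` (matrix entries equal to 1)."""
    i = facet["units"].index(unit)
    row = facet["relations"][rel][i]
    return [u for u, bit in zip(facet["units"], row) if bit]

print(related(facet, "is-about", "doc1"))  # ['topicA']
```

Each such facet represents one viewpoint of the knowledge system; a faceted thesaurus, as in the abstract, would combine several facets over shared units.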
  5. Qin, J.; Paling, S.: Converting a controlled vocabulary into an ontology : the case of GEM (2001) 0.03
    
    Date
    24. 8.2005 19:20:22
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  6. Tudhope, D.; Hodge, G.: Terminology registries (2007) 0.02
    
    Date
    26.12.2011 13:22:07
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  7. Prokop, M.: Hans Jonas and the phenomenological continuity of life and mind (2022) 0.01
    
    Abstract
    This paper offers a novel interpretation of Hans Jonas' analysis of metabolism, the centrepiece of Jonas' philosophy of organism, in relation to recent controversies regarding the phenomenological dimension of life-mind continuity as understood within 'autopoietic' enactivism (AE). Jonas' philosophy of organism chiefly inspired AE's development of what we might call 'the phenomenological life-mind continuity thesis' (PLMCT), the claim that certain phenomenological features of human experience are central to a proper scientific understanding of both life and mind, and as such central features of all living organisms. After discussing the understanding of PLMCT within AE, and recent criticisms thereof, I develop a reading of Jonas' analysis of metabolism, in light of previous commentators, which emphasizes its systematicity and transcendental flavour. The central thought is that, for Jonas, the attribution of certain phenomenological features is a necessary precondition for our understanding of the possibility of metabolism, rather than being derivable from metabolism itself. I argue that my interpretation strengthens Jonas' contribution to AE's justification for ascribing certain phenomenological features to life across the board. However, it also emphasises the need to complement Jonas' analysis with an explanatory account of organic identity in order to vindicate these phenomenological ascriptions in a scientific context.
  8. Sewing, S.: Bestandserhaltung und Archivierung : Koordinierung auf der Basis eines gemeinsamen Metadatenformates in den deutschen und österreichischen Bibliotheksverbünden (2021) 0.01
    
    Abstract
    The 2015 recommendations for action of the Koordinierungsstelle für die Erhaltung des schriftlichen Kulturguts (KEK) call for a national standard for documenting preservation measures: "In future, library catalogues should document preservation measures for holdings from 1851 onwards [.] in a standardized form and make them searchable, to allow comparison across library networks. This requires a joint agreement with the library networks [.]." Based on a survey carried out in 2015, the KEK recommendations identify almost nine million monograph volumes from the period 1851-1990, held as deposit copies at federal and state institutions, that are acutely threatened by paper decay and are to be deacidified as the first stage of an overall strategy. One of the KEK's goals is to promote standardized and certified mass-deacidification processes. Initially, five mass-deacidification processes are documented in the metadata format as controlled vocabulary: DEZ, Mg3/MBG, METE, MgO, MMMC[2]. With these entries, which can be selected specifically, the use of individual mass-deacidification processes becomes retrievable and statistically analysable in the medium and long term.
    Date
    22. 5.2021 12:43:05
  9. Tzitzikas, Y.; Spyratos, N.; Constantopoulos, P.; Analyti, A.: Extended faceted ontologies (2002) 0.01
    
    Abstract
    A faceted ontology consists of a set of facets, where each facet consists of a predefined set of terms structured by a subsumption relation. We propose two extensions of faceted ontologies which allow inferring the conjunctions of terms that are valid in the underlying domain. We give a model-theoretic interpretation to these extended faceted ontologies and provide mechanisms for inferring the valid conjunctions of terms. This inference service can be exploited to prevent errors during the indexing process and to derive navigation trees suitable for browsing. The proposed scheme has several advantages over the hierarchical classification schemes currently in use, namely conceptual clarity (it is easier to understand), compactness (it takes less space), and scalability (the update operations can be formulated more easily and performed more efficiently).
  10. Reichmann, W.: Open Science zwischen sozialen Strukturen und Wissenskulturen : eine wissenschaftssoziologische Erweiterung (2017) 0.01
    
    Abstract
    This contribution argues for a differentiated interpretation of the open-science idea: as a comprehensive structural phenomenon and equally as a cultural one. In public debate, open science is often reduced to the structural opening of the publication market on the demand side. This overlooks the fact that science also consists of further structures, for example the social structure of scientific communities, in which mechanisms of both closure and opening can be observed. Beyond that, open science should be interpreted as a cultural phenomenon. Using the concept of "Wissenskulturen" (epistemic cultures), the contribution shows that in scientific practice open science presents itself as a processual and heterogeneous phenomenon, and that openness means different things to different groups within the scientific community.
  11. Laaff, M.: Googles genialer Urahn (2011) 0.01
    
    Content
    The dream of a dynamic, ever-growing knowledge network: Otlet was already considering how annotations that correct errors or record dissent could flow into his networked knowledge catalogue. Charles van den Heuvel of the Royal Netherlands Academy of Arts and Sciences warns against pressing this analogy too far, however: on his interpretation, Otlet envisioned a system in which knowledge is ordered hierarchically. Only a small group of scholars was to work on classifying knowledge, and edits and annotations were not to merge with the information, as they do in Wikipedia, but merely to supplement it. The network Otlet imagined went far beyond the World Wide Web and its hypertext structure: he wanted information not merely interlinked, but links that were themselves charged with meaning. Many experts agree that this idea of Otlet's shows strong parallels to the concept of the "semantic web", whose goal is to make the meaning of information usable by machines, so that information can be interpreted and processed automatically. Projects attempting to realize the semantic web could profit from a look at Otlet's concepts, says van den Heuvel, in particular his reflections on hierarchy and centralization. At the Mundaneum in Mons, work is under way to digitize Otlet's papers and put them online. That is likely to take quite some time, archivist Gillen cautions. But once it is done, Otlet's vision will at last be fulfilled: his collection of knowledge will be accessible to the world, paperless and retrievable by anyone."
    Date
    24.10.2008 14:19:22
  12. Will, L.D.: UML model : as given in British Standard Draft for Development DD8723-5:2008 (2008) 0.01
    
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  13. Dextre Clarke, S.G.; Will, L.D.; Cochard, N.: The BS8723 thesaurus data model and exchange format, and its relationship to SKOS (2008) 0.01
    
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  14. Will, L.D.: Publications on thesaurus construction and use : including some references to facet analysis, taxonomies, ontologies, topic maps and related issues (2005) 0.01
    
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  15. Assem, M. van; Rijgersberg, H.; Wigham, M.; Top, J.: Converting and annotating quantitative data tables (2010) 0.01
    
    Abstract
    Companies, governmental agencies and scientists produce a large amount of quantitative (research) data, consisting of measurements ranging from e.g. the surface temperature of an ocean to the viscosity of a sample of mayonnaise. Such measurements are stored in tables, e.g. in spreadsheet files and research reports. To integrate and reuse such data, a semantic description of the data is necessary. However, the notation used is often ambiguous, making automatic interpretation and conversion to RDF or another suitable format difficult. For example, the table header cell "f(Hz)" refers to frequency measured in Hertz, but the symbol "f" can also refer to the unit farad or to the quantities force or luminous flux. Current annotation tools for this task either work on less ambiguous data or perform a more limited task. We introduce new disambiguation strategies based on an ontology, which allow performance to be improved on "sloppy" datasets not yet targeted by existing systems.
  16. Xiaoyue M.; Cahier, J.-P.: Iconic categorization with knowledge-based "icon systems" can improve collaborative KM (2011) 0.01
    
    Abstract
    Icon systems could offer an efficient solution for the collective iconic categorization of knowledge by providing a graphical interpretation. Their pictorial character helps make the structure of a text visible and understandable across vocabulary barriers. In this paper we propose a knowledge-engineering-based iconic representation approach. We assume that these systematic icons improve collective knowledge management (KM); meanwhile, text (structured under our knowledge management model, Hypertopic) helps reduce the diversity of graphical readings among different users. This position paper also prepares to test our hypothesis through an "iconic social tagging" experiment to be carried out in 2011 with UTT students. We describe the "socio-semantic web" information portal involved in this project, and some of the icons already designed for this experiment in the sustainability field. We have reviewed existing theoretical work on icons from various origins that can be used to lay the foundation of robust "icon systems".
  17. Donahue, J.; Hendricks, L.A.; Guadarrama, S.; Rohrbach, M.; Venugopalan, S.; Saenko, K.; Darrell, T.: Long-term recurrent convolutional networks for visual recognition and description (2014) 0.01
    
    Abstract
    Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or "temporally deep", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are "doubly deep" in that they can be compositional in spatial and temporal "layers". Such models may have advantages when target concepts are complex and/or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they can directly map variable-length inputs (e.g., video frames) to variable-length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and/or optimized.
  18. Choi, I.: Visualizations of cross-cultural bibliographic classification : comparative studies of the Korean Decimal Classification and the Dewey Decimal Classification (2017) 0.01
    Abstract
    The changes in KO systems induced by sociocultural influences may include changes in both classificatory principles and cultural features. The proposed study examines the Korean Decimal Classification (KDC)'s adaptation of the Dewey Decimal Classification (DDC) by comparing the two systems. This case manifests the sociocultural influences on KOSs in a cross-cultural context. The study therefore aims at an in-depth investigation of sociocultural influences by situating a KOS in a cross-cultural environment and examining the dynamics between two classification systems designed to organize information resources in two distinct sociocultural contexts. As a preceding stage of the comparison, the changes that result from the meeting of different sociocultural features were analyzed descriptively. The analysis aims to identify variations between the two schemes by comparing the knowledge structures of the two classifications, in terms of the quantity of class numbers that represent concepts and their relationships in each of the individual main classes. The most effective analytic strategy for showing the patterns of the comparison was the visualization of similarities and differences between the two systems. Increasing and decreasing tendencies in each class across various editions were analyzed. Comparing the compositions of the main classes and the distributions of concepts in the KDC and DDC discloses the differences in their knowledge structures empirically. This phase of quantitative analysis and visualization techniques generates empirical evidence leading to interpretation.
  19. Lee, W.-C.: Conflicts of semantic warrants in cataloging practices (2017) 0.01
    Abstract
    This study presents preliminary themes surfaced from an ongoing ethnographic study. The research question is: how and where do cultures influence the cataloging practices of using U.S. standards to catalog Chinese materials? The author applies warrant as a lens for evaluating knowledge representation systems, and extends the application from examining classificatory decisions to cataloging decisions. Semantic warrant as a conceptual tool allows us to recognize and name the various rationales behind cataloging decisions, grants us explanatory power, and gives us the language to "visualize" and reflect on the conflicting priorities in cataloging practices. Through participatory observation, the author recorded the cataloging practices of two Chinese catalogers working on the same cataloging project. One of the catalogers is U.S. trained; the other is a professor of Library and Information Science from China, who is also a subject expert and a cataloger of Chinese special collections. The study shows how the catalogers describe Chinese special collections using many U.S. cataloging and classification standards but from different approaches. The author presents particular cases derived from the fieldwork, with an emphasis on the many layers presented by cultures, principles, standards, and practices of different scope, each of which may represent conflicting warrants. From this, it is made clear that conflicts of warrants influence cataloging practice. We may view the conflicting warrants as an interpretation of the tension between different semantic warrants and the globalization and localization of cataloging standards.
  20. Gillitzer, B.: Yewno (2017) 0.01
    Abstract
    "The Bayerische Staatsbibliothek is testing the semantic 'discovery service' Yewno as an additional thematic search engine for digital full texts. The service is available at the following link: https://www.bsb-muenchen.de/recherche-und-service/suchen-und-finden/yewno/. In Yewno, identifying the topics a text deals with is based solely on methods of artificial intelligence and machine learning. Topics are not assigned to a text as a whole, as in classical catalog systems, but to the respective passage within it. Entering a search term or topic, called a 'concept' in Yewno, immediately produces a graphical display of a semantic network of relevant concepts and their substantive relationships. This makes it possible to navigate along thematic relationships down to the matching passages in the text, which are then displayed in so-called snippets. In the Bayerische Staatsbibliothek's test deployment, Yewno currently searches 40 million English-language documents from publications of renowned academic publishers such as Cambridge University Press, Oxford University Press, Wiley, Sage and Springer, as well as documents available in open access. After the three-month test phase, user feedback will first be evaluated. Whether and when the step from the classical search engine to the semantic 'discovery service' will come, and what significance applications such as Yewno will have in this context, cannot yet be foreseen. The Yewno software was developed by the startup of the same name in cooperation with Stanford University, with which the Bayerische Staatsbibliothek also cooperates closely. [Inetbib posting of 22.02.2017]"
    Date
    22. 2.2017 10:16:49
