Search (110 results, page 1 of 6)

  • Filter: type_ss:"a"
  • Filter: type_ss:"el"
  1. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.45
    0.4521981 = product of:
      0.7913466 = sum of:
        0.07913466 = product of:
          0.23740397 = sum of:
            0.23740397 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.23740397 = score(doc=230,freq=2.0), product of:
                0.31681007 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.037368443 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
        0.23740397 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.23740397 = score(doc=230,freq=2.0), product of:
            0.31681007 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037368443 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
        0.23740397 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.23740397 = score(doc=230,freq=2.0), product of:
            0.31681007 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037368443 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
        0.23740397 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.23740397 = score(doc=230,freq=2.0), product of:
            0.31681007 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037368443 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
      0.5714286 = coord(4/7)
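For reference, the ClassicSimilarity arithmetic in the explanation tree above can be replayed by hand. A minimal Python sketch (all constants are copied from the tree itself; Lucene computes in 32-bit floats, so the last printed digit can drift):

```python
import math

# Constants as printed in the explanation tree for doc 230.
query_norm = 0.037368443
idf = 8.478011        # idf(docFreq=24, maxDocs=44218)
freq = 2.0
field_norm = 0.0625

tf = math.sqrt(freq)                      # 1.4142135 = tf(freq=2.0)
query_weight = idf * query_norm           # 0.31681007
field_weight = tf * idf * field_norm      # 0.7493574
term_score = query_weight * field_weight  # 0.23740397 per matching clause

# One clause ("3a") is wrapped in coord(1/3); the three "2f" clauses are not.
inner_sum = term_score * (1 / 3) + 3 * term_score  # 0.7913466
score = inner_sum * (4 / 7)                        # coord(4/7) -> 0.4521981

print(round(score, 6))
```

Replaying the tree this way is a quick sanity check that the coord factors, not the term weights, account for most of the difference between raw sums and final scores.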
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  2. Priss, U.: Faceted knowledge representation (1999) 0.03
    
    Abstract
    Faceted Knowledge Representation provides a formalism for implementing knowledge systems. The basic notions of faceted knowledge representation are "unit", "relation", "facet" and "interpretation". Units are atomic elements and can be abstract elements or refer to external objects in an application. Relations are sequences or matrices of 0 and 1's (binary matrices). Facets are relational structures that combine units and relations. Each facet represents an aspect or viewpoint of a knowledge system. Interpretations are mappings that can be used to translate between different representations. This paper introduces the basic notions of faceted knowledge representation. The formalism is applied here to an abstract modeling of a faceted thesaurus as used in information retrieval.
    Date
    22. 1.2016 17:30:31
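The four notions named in the Priss abstract (units, binary-matrix relations, facets, interpretations) can be given a toy rendering. A minimal sketch, with illustrative names and data that are not from the paper:

```python
# Toy rendering of the abstract's four notions: units (atomic elements),
# a relation (binary matrix over units), a facet (units + relation), and
# an interpretation (a mapping between representations).

units = ["animal", "mammal", "dog"]

# Binary matrix: relation[i][j] == 1 means units[i] subsumes units[j].
relation = [
    [1, 1, 1],
    [0, 1, 1],
    [0, 0, 1],
]

# A facet combines units and a relation; it models one viewpoint.
facet = {"units": units, "relation": relation}

# An interpretation translates this facet's units into another
# representation, here German labels.
interpretation = {"animal": "Tier", "mammal": "Säugetier", "dog": "Hund"}

def related(facet, a, b):
    """True if unit a stands in the facet's relation to unit b."""
    i, j = facet["units"].index(a), facet["units"].index(b)
    return facet["relation"][i][j] == 1

print(related(facet, "animal", "dog"))  # True
print(interpretation["dog"])            # Hund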
  3. Qin, J.; Paling, S.: Converting a controlled vocabulary into an ontology : the case of GEM (2001) 0.03
    
    Date
    24. 8.2005 19:20:22
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  4. Prokop, M.: Hans Jonas and the phenomenological continuity of life and mind (2022) 0.01
    
    Abstract
    This paper offers a novel interpretation of Hans Jonas' analysis of metabolism, the centrepiece of Jonas' philosophy of organism, in relation to recent controversies regarding the phenomenological dimension of life-mind continuity as understood within 'autopoietic' enactivism (AE). Jonas' philosophy of organism chiefly inspired AE's development of what we might call 'the phenomenological life-mind continuity thesis' (PLMCT), the claim that certain phenomenological features of human experience are central to a proper scientific understanding of both life and mind, and as such central features of all living organisms. After discussing the understanding of PLMCT within AE, and recent criticisms thereof, I develop a reading of Jonas' analysis of metabolism, in light of previous commentators, which emphasizes its systematicity and transcendental flavour. The central thought is that, for Jonas, the attribution of certain phenomenological features is a necessary precondition for our understanding of the possibility of metabolism, rather than being derivable from metabolism itself. I argue that my interpretation strengthens Jonas' contribution to AE's justification for ascribing certain phenomenological features to life across the board. However, it also emphasises the need to complement Jonas' analysis with an explanatory account of organic identity in order to vindicate these phenomenological ascriptions in a scientific context.
  5. Sewing, S.: Bestandserhaltung und Archivierung : Koordinierung auf der Basis eines gemeinsamen Metadatenformates in den deutschen und österreichischen Bibliotheksverbünden (2021) 0.01
    
    Abstract
    In its 2015 recommendations for action, the Coordination Office for the Preservation of Written Cultural Heritage (KEK) calls for a national standard for documenting preservation measures: "In future, library catalogues should document preservation measures for holdings from 1851 onwards [.] in a standardised form that can be searched and compared across the library networks. This requires a joint agreement with the library networks [.]." On the basis of a survey conducted in 2015, the KEK recommendations identify almost nine million monograph volumes from the period 1851-1990, held as legal deposit copies at federal and state institutions, that are acutely threatened by paper decay and are to be deacidified as the first stage of an overall strategy. One of the KEK's goals is to promote standardised and certified mass deacidification procedures. To begin with, five mass deacidification procedures are documented in the metadata format as controlled vocabulary: DEZ, Mg3/MBG, METE, MgO, MMMC[2]. With this information, which can be selected specifically, the use of individual mass deacidification procedures can be retrieved and evaluated statistically in the medium and long term.
    Date
    22. 5.2021 12:43:05
  6. Tzitzikas, Y.; Spyratos, N.; Constantopoulos, P.; Analyti, A.: Extended faceted ontologies (2002) 0.01
    
    Abstract
    A faceted ontology consists of a set of facets, where each facet consists of a predefined set of terms structured by a subsumption relation. We propose two extensions of faceted ontologies, which allow inferring conjunctions of terms that are valid in the underlying domain. We give a model-theoretic interpretation to these extended faceted ontologies and we provide mechanisms for inferring the valid conjunctions of terms. This inference service can be exploited for preventing errors during the indexing process and for deriving navigation trees that are suitable for browsing. The proposed scheme has several advantages by comparison to the hierarchical classification schemes that are currently used, namely: conceptual clarity: it is easier to understand, compactness: it takes less space, and scalability: the update operations can be formulated easier and be performed more efficiently.
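The inference service described in the abstract can be illustrated in miniature: from a few term combinations declared valid in the domain, broader conjunctions are inferred valid via the subsumption relation. A toy sketch (facet terms and the `broader`/`is_valid` names are illustrative, not from the paper):

```python
# broader[t] = terms that directly subsume t; the closure is computed below.
broader = {
    "laptop": {"computer"},
    "computer": {"product"},
    "berlin": {"germany"},
    "germany": {"europe"},
}

def ancestors(term):
    """All terms subsuming `term`, including itself (transitive closure)."""
    seen, todo = {term}, [term]
    while todo:
        for b in broader.get(todo.pop(), ()):
            if b not in seen:
                seen.add(b)
                todo.append(b)
    return seen

# Combinations declared valid in the underlying domain.
valid = [("laptop", "berlin")]

def is_valid(conjunction):
    """A conjunction is valid if some declared combination refines it,
    i.e. every conjunct subsumes a term of that combination."""
    return any(all(any(c in ancestors(v) for v in combo) for c in conjunction)
               for combo in valid)

print(is_valid(("computer", "europe")))  # True: refined by laptop/berlin
print(is_valid(("computer", "phone")))   # False: nothing refines "phone"
```

This is one simple reading of "inferring the valid conjunctions of terms"; the paper itself gives a model-theoretic interpretation rather than this closure-based shortcut.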
  7. Reichmann, W.: Open Science zwischen sozialen Strukturen und Wissenskulturen : eine wissenschaftssoziologische Erweiterung (2017) 0.01
    
    Abstract
    This article argues for a differentiated interpretation of the Open Science idea: as a comprehensive structural phenomenon as well as a cultural one. In public debate, Open Science is often reduced to the structural opening of the publication market on the demand side. This overlooks the fact that science also consists of structures beyond that, for instance the social structure of scientific communities, in which mechanisms of closure and opening can be observed. Open Science should, moreover, be interpreted as a cultural phenomenon. Drawing on the concept of "epistemic cultures" ("Wissenskulturen"), the article shows that in scientific practice Open Science presents itself as a processual and heterogeneous phenomenon, and that openness carries different meanings for different groups within the scientific community.
  8. Will, L.D.: UML model : as given in British Standard Draft for Development DD8723-5:2008 (2008) 0.01
    
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  9. Dextre Clarke, S.G.; Will, L.D.; Cochard, N.: ¬The BS8723 thesaurus data model and exchange format, and its relationship to SKOS (2008) 0.01
    
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  10. Assem, M. van; Rijgersberg, H.; Wigham, M.; Top, J.: Converting and annotating quantitative data tables (2010) 0.01
    
    Abstract
    Companies, governmental agencies and scientists produce a large amount of quantitative (research) data, consisting of measurements ranging from e.g. the surface temperatures of an ocean to the viscosity of a sample of mayonnaise. Such measurements are stored in tables in e.g. spreadsheet files and research reports. To integrate and reuse such data, it is necessary to have a semantic description of the data. However, the notation used is often ambiguous, making automatic interpretation and conversion to RDF or another suitable format difficult. For example, the table header cell "f(Hz)" refers to frequency measured in Hertz, but the symbol "f" can also refer to the unit farad or the quantities force or luminous flux. Current annotation tools for this task either work on less ambiguous data or perform a more limited task. We introduce new disambiguation strategies based on an ontology, which allow performance to be improved on "sloppy" datasets not yet targeted by existing systems.
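The "f(Hz)" example in the abstract is easy to make concrete: the unit in parentheses can disambiguate an otherwise ambiguous symbol against a small quantity/unit ontology. A toy sketch, with an illustrative ontology that is not the authors' actual system:

```python
import re

# symbol -> candidate quantities, each with the units it is measured in.
QUANTITIES = {
    "f": [
        {"quantity": "frequency", "units": {"Hz", "kHz"}},
        {"quantity": "force", "units": {"N"}},
        {"quantity": "luminous flux", "units": {"lm"}},
    ],
}

def annotate(header):
    """Return (quantity, unit) for headers of the form 'symbol(unit)',
    or None when the header does not match or cannot be disambiguated."""
    m = re.fullmatch(r"(\w+)\((\w+)\)", header)
    if not m:
        return None
    symbol, unit = m.groups()
    for cand in QUANTITIES.get(symbol, []):
        if unit in cand["units"]:  # the unit picks out the quantity
            return (cand["quantity"], unit)
    return None

print(annotate("f(Hz)"))  # ('frequency', 'Hz')
print(annotate("f(N)"))   # ('force', 'N')
```

Real tables are of course messier than this regular expression allows; that gap is exactly what the paper's ontology-based strategies address.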
  11. Xiaoyue M.; Cahier, J.-P.: Iconic categorization with knowledge-based "icon systems" can improve collaborative KM (2011) 0.01
    
    Abstract
    Icon systems could provide an efficient solution for the collective iconic categorization of knowledge by providing a graphical interpretation. Their pictorial character helps visualize the structure of a text, making it more understandable across vocabulary barriers. In this paper we propose a Knowledge Management (KM)-based iconic representation approach. We assume that such systematic icons improve collective knowledge management. Meanwhile, text (constructed under our knowledge management model - Hypertopic) helps to reduce the diversity of graphical understanding among different users. This "position paper" also prepares to demonstrate our hypothesis through an "iconic social tagging" experiment, to be carried out in 2011 with UTT students. We describe the "socio semantic web" information portal involved in this project, and some of the icons already designed for this experiment in the sustainability field. We have reviewed existing theoretical work on icons from various origins, which can be used to lay the foundation of robust "icon systems".
  12. Donahue, J.; Hendricks, L.A.; Guadarrama, S.; Rohrbach, M.; Venugopalan, S.; Saenko, K.; Darrell, T.: Long-term recurrent convolutional networks for visual recognition and description (2014) 0.01
    
    Abstract
    Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or "temporally deep", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are "doubly deep" in that they can be compositional in spatial and temporal "layers". Such models may have advantages when target concepts are complex and/or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and/or optimized.
  13. Choi, I.: Visualizations of cross-cultural bibliographic classification : comparative studies of the Korean Decimal Classification and the Dewey Decimal Classification (2017) 0.01
    
    Abstract
    The changes in KO systems induced by sociocultural influences may include changes in both classificatory principles and cultural features. The proposed study examines the Korean Decimal Classification (KDC)'s adaptation of the Dewey Decimal Classification (DDC) by comparing the two systems. This case manifests the sociocultural influences on KOSs in a cross-cultural context. The study therefore aims at an in-depth investigation of sociocultural influences by situating a KOS in a cross-cultural environment and examining the dynamics between two classification systems designed to organize information resources in two distinct sociocultural contexts. As a preceding stage of the comparison, a descriptive analysis was conducted of the changes that result from the meeting of different sociocultural features. The analysis aims to identify variations between the two schemes by comparing the knowledge structures of the two classifications, in terms of the quantity of class numbers that represent concepts and their relationships in each of the individual main classes. The most effective analytic strategy for showing the patterns of the comparison was visualization of the similarities and differences between the two systems. Increasing or decreasing tendencies in the classes across various editions were analyzed. Comparing the compositions of the main classes and the distributions of concepts in the KDC and DDC discloses the differences in their knowledge structures empirically. This phase of quantitative analysis and visualization generates empirical evidence leading to interpretation.
  14. Lee, W.-C.: Conflicts of semantic warrants in cataloging practices (2017) 0.01
    
    Abstract
    This study presents preliminary themes surfaced from an ongoing ethnographic study. The research question is: how and where do cultures influence the cataloging practices of using U.S. standards to catalog Chinese materials? The author applies warrant as a lens for evaluating knowledge representation systems, and extends the application from examining classificatory decisions to cataloging decisions. Semantic warrant as a conceptual tool allows us to recognize and name the various rationales behind cataloging decisions, grants us explanatory power, and the language to "visualize" and reflect on the conflicting priorities in cataloging practices. Through participatory observation, the author recorded the cataloging practices of two Chinese catalogers working on the same cataloging project. One of the catalogers is U.S. trained, and another cataloger is a professor of Library and Information Science from China, who is also a subject expert and a cataloger of Chinese special collections. The study shows how the catalogers describe Chinese special collections using many U.S. cataloging and classification standards but from different approaches. The author presents particular cases derived from the fieldwork, with an emphasis on the many layers presented by cultures, principles, standards, and practices of different scope, each of which may represent conflicting warrants. From this, it is made clear that the conflicts of warrants influence cataloging practice. We may view the conflicting warrants as an interpretation of the tension between different semantic warrants and the globalization and localization of cataloging standards.
  15. Trant, J.; Bearman, D.: Social terminology enhancement through vernacular engagement : exploring collaborative annotation to encourage interaction with museum collections (2005) 0.01
    
    Abstract
    From their earliest encounters with the Web, museums have seen an opportunity to move beyond uni-directional communication into an environment that engages their users and reflects a multiplicity of perspectives. Shedding the "Unassailable Voice" (Walsh 1997) in favor of many "Points of View" (Sledge 1995) has challenged traditional museum approaches to the creation and delivery of content. Novel approaches are required in order to develop and sustain user engagement (Durbin 2004). New models of exhibit creation that democratize the curatorial functions of object selection and interpretation offer one way of opening up the museum (Coldicutt and Streten 2005). Another is to use the museum as a forum and focus for community story-telling (Howard, Pratty et al. 2005). Unfortunately, museum collections remain relatively inaccessible even when 'made available' through searchable on-line databases. Museum documentation seldom satisfies the on-line access needs of the broad public, both because it is written using professional terminology and because it may not address what is important to - or remembered by - the museum visitor. For example, an exhibition now on-line at The Metropolitan Museum of Art acknowledges "Coco" Chanel only in the brief, textual introduction (The Metropolitan Museum of Art 2005a). All of the images of her delightful fashion designs are attributed to "Gabrielle Chanel" (The Metropolitan Museum of Art 2005a). Interfaces that organize collections along axes of time or place - such of that of the Timeline of Art History (The Metropolitan Museum of Art 2005e) - often fail to match users' world-views, despite the care that went into their structuring or their significant pedagogical utility. Critically, as professionals working with art museums we realize that when cataloguers and curators describe works of art, they usually do not include the "subject" of the image itself. Simply put, we rarely answer the question "What is it a picture of?" 
    Unfortunately, visitors will often remember a work based on its visual characteristics, only to find that Web-based searches for any of the things they recall do not produce results.
  16. Jörs, B.: ¬Die Informationswissenschaft ist tot, es lebe die Datenwissenschaft (2019) 0.01
    
    Abstract
    "Have 'data' and 'data science' made 'information' and information science obsolete? Has data science, with its AI-supported instruments, taken over economic and technical control of 'data', and with it of 'information' and 'knowledge'? Data science, mostly at home in computer science and mathematics, has assumed the scientific leadership role in transferring 'data' into 'information' and 'knowledge'." "The shift from analogue to digital information processing has essentially rendered information science obsolete. Today, research centres on the category 'data' and its causal connection with the generation of 'knowledge' (recognition of patterns and relationships, predictive capability, etc.) and on neural processing and storage." "If North's knowledge ladder also applied to information science, it would recognise that engaging with 'data', and the interpretation of 'data' made possible by prior knowledge, first create the preconditions for understanding 'information' as 'contextualised data' in order to be able to structure, represent, generate and search for 'information'."
  17. Geuter, J.: Nein, Ethik kann man nicht programmieren (2018)
    
    Content
    Misconception 1: The application of ethics can be formulated in computer programs - Misconception 2: Data produce truth, and if they don't, you simply need more data - Misconception 3: In 20 years there will be an artificial intelligence that is as good as or better than human intelligence - Misconception 4: Discrimination by algorithms is worse than discrimination by humans - Misconception 5: Laws and contracts can be expressed in code in order to standardise their application - Misconception 6: Digital freedom expresses itself in the complete autonomy of the individual.
  18. Baierer, K.; Zumstein, P.: Verbesserung der OCR in digitalen Sammlungen von Bibliotheken (2016)
    
    Abstract
    Approaches to improving automatic text recognition (OCR) in digital collections, in particular through computational-linguistic methods, are described, and existing post-OCR procedures are analysed. In contrast to these approaches from research and individual projects, the current use of OCR in library practice differs considerably and exploits the potential only in part.
  19. Winterhalter, C.: Licence to mine : ein Überblick über Rahmenbedingungen von Text and Data Mining und den aktuellen Stand der Diskussion (2016)
    
    Abstract
    The article gives an overview of the options for applying text and data mining (TDM) and similar techniques on the basis of existing provisions in licence agreements for fee-based electronic resources, of the debate about additional licences for TDM using the example of Elsevier's TDM Policy, and of the state of the discussion on introducing copyright exceptions for TDM for non-commercial scientific purposes.
  20. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001)
    
    Date
    22.07.2006 15:22:28
