Search (4238 results, page 1 of 212)

  1. Forsyth, D.A.: Finding pictures of objects in large collections of images (1997) 0.24
    0.23604803 = product of:
      0.35407203 = sum of:
        0.1261333 = weight(_text_:objects in 763) [ClassicSimilarity], result of:
          0.1261333 = score(doc=763,freq=4.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.49828792 = fieldWeight in 763, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=763)
        0.22793876 = sum of:
          0.17318656 = weight(_text_:fusion in 763) [ClassicSimilarity], result of:
            0.17318656 = score(doc=763,freq=2.0), product of:
              0.35273543 = queryWeight, product of:
                7.406428 = idf(docFreq=72, maxDocs=44218)
                0.047625583 = queryNorm
              0.49098146 = fieldWeight in 763, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.406428 = idf(docFreq=72, maxDocs=44218)
                0.046875 = fieldNorm(doc=763)
          0.054752205 = weight(_text_:22 in 763) [ClassicSimilarity], result of:
            0.054752205 = score(doc=763,freq=4.0), product of:
              0.16677667 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047625583 = queryNorm
              0.32829654 = fieldWeight in 763, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=763)
      0.6666667 = coord(2/3)
    
    Abstract
    Describes an approach to the problem of object recognition structured around a sequence of increasingly specialised grouping activities that assemble coherent regions of images that can be shown to satisfy increasingly stringent conditions. The recognition system is designed to cope with colour and texture, to deal with general objects in uncontrolled configurations and contexts, and to support a satisfactory notion of classification. These properties are illustrated using 3 case studies, demonstrating: the use of descriptions that fuse colour and spatial properties; the use of fusion of texture and geometric properties to describe trees; and the use of a recognition system to determine accurately whether an image contains people and animals
    Date
    22. 9.1997 19:16:05
    3. 1.1999 12:21:22
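The indented score breakdowns in this listing are Lucene explain() trees for the ClassicSimilarity (TF-IDF) model. A minimal sketch of the per-term arithmetic, taking queryNorm as given from the output (it depends on the whole query, not just the matching terms), might look like this:

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    """Classic Lucene idf: 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq: float, doc_freq: int, max_docs: int,
               query_norm: float, field_norm: float) -> float:
    """score = queryWeight * fieldWeight, where
    queryWeight = idf * queryNorm and
    fieldWeight = tf * idf * fieldNorm, with tf = sqrt(freq)."""
    i = idf(doc_freq, max_docs)
    return (i * query_norm) * (math.sqrt(freq) * i * field_norm)

# Reproduce the "objects" clause of result 1 (doc 763):
s = term_score(freq=4.0, doc_freq=590, max_docs=44218,
               query_norm=0.047625583, field_norm=0.046875)
print(s)  # ≈ 0.1261333, matching the weight(_text_:objects ...) line
# The final document score multiplies the summed clauses by
# coord(2/3), the fraction of query terms that matched.
```

The idf values shown throughout the trees (5.315071 for "objects", 7.406428 for "fusion", 3.5018296 for "22") follow directly from this formula and the docFreq/maxDocs figures printed beside them.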
  2. Huibers, T.W.C.; Bruza, P.D.: Situations, a general framework for studying information retrieval (1996) 0.20
    0.20072794 = product of:
      0.3010919 = sum of:
        0.0891897 = weight(_text_:objects in 6963) [ClassicSimilarity], result of:
          0.0891897 = score(doc=6963,freq=2.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.35234275 = fieldWeight in 6963, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=6963)
        0.2119022 = sum of:
          0.17318656 = weight(_text_:fusion in 6963) [ClassicSimilarity], result of:
            0.17318656 = score(doc=6963,freq=2.0), product of:
              0.35273543 = queryWeight, product of:
                7.406428 = idf(docFreq=72, maxDocs=44218)
                0.047625583 = queryNorm
              0.49098146 = fieldWeight in 6963, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.406428 = idf(docFreq=72, maxDocs=44218)
                0.046875 = fieldNorm(doc=6963)
          0.038715653 = weight(_text_:22 in 6963) [ClassicSimilarity], result of:
            0.038715653 = score(doc=6963,freq=2.0), product of:
              0.16677667 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047625583 = queryNorm
              0.23214069 = fieldWeight in 6963, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=6963)
      0.6666667 = coord(2/3)
    
    Abstract
    Presents a framework for the theoretical comparison of information retrieval models based on how the models decide aboutness. The framework is based on concepts emerging from the field of situation theory. So-called infons and profons represent elementary information carriers which can be manipulated by union and fusion operators. These operators allow relationships between information carriers to be established. Sets of infons form so-called situations, which are used to model the information borne by objects such as documents. Demonstrates how an arbitrary information retrieval model can be mapped into the framework via special functions defined for this purpose, depending on the model at hand. 2 examples are given, based on the Boolean retrieval and coordination level matching models. Starting from an axiomatization of aboutness, retrieval models can be compared according to which axioms they are governed by
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  3. Schumann, A.: Bereit für XHTML (2000) 0.20
    0.19531444 = product of:
      0.29297164 = sum of:
        0.14864951 = weight(_text_:objects in 2297) [ClassicSimilarity], result of:
          0.14864951 = score(doc=2297,freq=2.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.58723795 = fieldWeight in 2297, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.078125 = fieldNorm(doc=2297)
        0.14432213 = product of:
          0.28864425 = sum of:
            0.28864425 = weight(_text_:fusion in 2297) [ClassicSimilarity], result of:
              0.28864425 = score(doc=2297,freq=2.0), product of:
                0.35273543 = queryWeight, product of:
                  7.406428 = idf(docFreq=72, maxDocs=44218)
                  0.047625583 = queryNorm
                0.8183024 = fieldWeight in 2297, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.406428 = idf(docFreq=72, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2297)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Object
    Net Objects Fusion
  4. Kaytoue, M.; Kuznetsov, S.O.; Assaghir, Z.; Napoli, A.: Embedding tolerance relations in concept lattices : an application in information fusion (2010) 0.15
    0.15385695 = product of:
      0.23078541 = sum of:
        0.12873426 = weight(_text_:objects in 4843) [ClassicSimilarity], result of:
          0.12873426 = score(doc=4843,freq=6.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.508563 = fieldWeight in 4843, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4843)
        0.102051154 = product of:
          0.20410231 = sum of:
            0.20410231 = weight(_text_:fusion in 4843) [ClassicSimilarity], result of:
              0.20410231 = score(doc=4843,freq=4.0), product of:
                0.35273543 = queryWeight, product of:
                  7.406428 = idf(docFreq=72, maxDocs=44218)
                  0.047625583 = queryNorm
                0.57862717 = fieldWeight in 4843, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.406428 = idf(docFreq=72, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4843)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Formal Concept Analysis (FCA) is a well-founded mathematical framework used for conceptual classification and knowledge management. Given a binary table describing a relation between objects and attributes, FCA consists of building a set of concepts organized by a subsumption relation within a concept lattice. Accordingly, FCA requires transforming complex data, e.g. numbers, intervals, or graphs, into binary data, leading to loss of information and poor interpretability of object classes. In this paper, we propose a pre-processing method producing binary data from complex data by taking advantage of similarity between objects. As a result, the concept lattice is composed of classes that are maximal sets of pairwise similar objects. This method is based on FCA and on a formalization of similarity as a tolerance relation (reflexive and symmetric). It applies to complex object descriptions and especially, here, to interval data. Moreover, it can be applied to any kind of structured data for which a similarity can be defined (sequences, graphs, etc.). Finally, an application highlights that the resulting concept lattice plays an important role in the information fusion problem, as illustrated with a real-world example in agronomy.
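As a rough illustration of the FCA machinery this abstract builds on (plain binary FCA only; the paper's tolerance-relation scaling is not shown), a sketch that enumerates the formal concepts of a toy context invented for the example:

```python
from itertools import combinations

# Toy binary context: objects x attributes (hypothetical data).
context = {
    "g1": {"a", "b"},
    "g2": {"a", "c"},
    "g3": {"a", "b", "c"},
}
attributes = {"a", "b", "c"}

def extent(attrs):
    """All objects possessing every attribute in attrs."""
    return {g for g, row in context.items() if attrs <= row}

def intent(objs):
    """All attributes shared by every object in objs."""
    if not objs:
        return set(attributes)
    return set.intersection(*(context[g] for g in objs))

# A formal concept is a pair (A, B) with A = extent(B) and B = intent(A).
concepts = set()
for r in range(len(attributes) + 1):
    for combo in combinations(sorted(attributes), r):
        B = intent(extent(set(combo)))  # closure of the attribute set
        A = extent(B)
        concepts.add((frozenset(A), frozenset(B)))

for A, B in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(A), sorted(B))
```

Brute-force closure over all attribute subsets is exponential and only suitable for tiny contexts; real FCA implementations use algorithms such as NextClosure.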
  5. Dick, S.J.: Astronomy's Three Kingdom System : a comprehensive classification system of celestial objects (2019) 0.15
    0.15379563 = product of:
      0.23069343 = sum of:
        0.2081093 = weight(_text_:objects in 5455) [ClassicSimilarity], result of:
          0.2081093 = score(doc=5455,freq=8.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.82213306 = fieldWeight in 5455, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5455)
        0.022584131 = product of:
          0.045168262 = sum of:
            0.045168262 = weight(_text_:22 in 5455) [ClassicSimilarity], result of:
              0.045168262 = score(doc=5455,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.2708308 = fieldWeight in 5455, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5455)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Although classification has been an important aspect of astronomy since stellar spectroscopy in the late nineteenth century, to date no comprehensive classification system has existed for all classes of objects in the universe. Here we present such a system, and lay out its foundational definitions and principles. The system consists of the "Three Kingdoms" of planets, stars and galaxies, eighteen families, and eighty-two classes of objects. Gravitation is the defining organizing principle for the families and classes, and the physical nature of the objects is the defining characteristic of the classes. The system should prove useful for both scientific and pedagogical purposes.
    Date
    21.11.2019 18:46:22
  6. Gladis, R.: Datenlimit? : Nein danke! (1998) 0.14
    0.14126813 = product of:
      0.4238044 = sum of:
        0.4238044 = sum of:
          0.3463731 = weight(_text_:fusion in 2011) [ClassicSimilarity], result of:
            0.3463731 = score(doc=2011,freq=2.0), product of:
              0.35273543 = queryWeight, product of:
                7.406428 = idf(docFreq=72, maxDocs=44218)
                0.047625583 = queryNorm
              0.9819629 = fieldWeight in 2011, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.406428 = idf(docFreq=72, maxDocs=44218)
                0.09375 = fieldNorm(doc=2011)
          0.077431306 = weight(_text_:22 in 2011) [ClassicSimilarity], result of:
            0.077431306 = score(doc=2011,freq=2.0), product of:
              0.16677667 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047625583 = queryNorm
              0.46428138 = fieldWeight in 2011, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.09375 = fieldNorm(doc=2011)
      0.33333334 = coord(1/3)
    
    Date
    22. 8.1998 10:40:17
    Object
    Cold Fusion
  7. Proffitt, M.: Pulling it all together : use of METS in RLG cultural materials service (2004) 0.13
    0.12932545 = product of:
      0.19398816 = sum of:
        0.16817772 = weight(_text_:objects in 767) [ClassicSimilarity], result of:
          0.16817772 = score(doc=767,freq=4.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.6643839 = fieldWeight in 767, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0625 = fieldNorm(doc=767)
        0.025810435 = product of:
          0.05162087 = sum of:
            0.05162087 = weight(_text_:22 in 767) [ClassicSimilarity], result of:
              0.05162087 = score(doc=767,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.30952093 = fieldWeight in 767, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=767)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    RLG has used METS for a particular application, that is as a wrapper for structural metadata. When RLG cultural materials was launched, there was no single way to deal with "complex digital objects". METS provides a standard means of encoding metadata regarding the digital objects represented in RCM, and METS has now been fully integrated into the workflow for this service.
    Source
    Library hi tech. 22(2004) no.1, S.65-68
  8. Johnson, E.H.: Using IODyne : Illustrations and examples (1998) 0.13
    0.12932545 = product of:
      0.19398816 = sum of:
        0.16817772 = weight(_text_:objects in 2341) [ClassicSimilarity], result of:
          0.16817772 = score(doc=2341,freq=4.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.6643839 = fieldWeight in 2341, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0625 = fieldNorm(doc=2341)
        0.025810435 = product of:
          0.05162087 = sum of:
            0.05162087 = weight(_text_:22 in 2341) [ClassicSimilarity], result of:
              0.05162087 = score(doc=2341,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.30952093 = fieldWeight in 2341, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2341)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    IODyne is an Internet client program that allows one to retrieve information from servers by dynamically combining information objects. Information objects are abstract representations of bibliographic data, typically titles (or title keywords), author names, subject and classification identifiers, and full-text search terms
    Date
    22. 9.1997 19:16:05
  9. Srinivasan, R.; Boast, R.; Becvar, K.M.; Furner, J.: Blobgects : digital museum catalogs and diverse user communities (2009) 0.12
    0.121551156 = product of:
      0.18232673 = sum of:
        0.16619521 = weight(_text_:objects in 2754) [ClassicSimilarity], result of:
          0.16619521 = score(doc=2754,freq=10.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.656552 = fieldWeight in 2754, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2754)
        0.016131522 = product of:
          0.032263044 = sum of:
            0.032263044 = weight(_text_:22 in 2754) [ClassicSimilarity], result of:
              0.032263044 = score(doc=2754,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.19345059 = fieldWeight in 2754, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2754)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This article presents an exploratory study of Blobgects, an experimental interface for an online museum catalog that enables social tagging and blogging activity around a set of cultural heritage objects held by a preeminent museum of anthropology and archaeology. This study attempts to understand not just whether social tagging and commenting about these objects is useful but rather whose tags and voices matter in presenting different expert perspectives around digital museum objects. Based on an empirical comparison between two different user groups (Canadian Inuit high-school students and museum studies students in the United States), we found that merely adding the ability to tag and comment to the museum's catalog does not sufficiently allow users to learn about or engage with the objects represented by catalog entries. Rather, the specialist language of the catalog provides too little contextualization for users to enter into the sort of dialog that proponents of Web 2.0 technologies promise. Overall, we propose a more nuanced application of Web 2.0 technologies within museums - one which provides a contextual basis that gives users a starting point for engagement and permits users to make sense of objects in relation to their own needs, uses, and understandings.
    Date
    22. 3.2009 18:52:32
  10. Holetschek, J. et al.: Natural history in Europeana : accessing scientific collection objects via LOD (2016) 0.12
    0.120608374 = product of:
      0.18091255 = sum of:
        0.14864951 = weight(_text_:objects in 3277) [ClassicSimilarity], result of:
          0.14864951 = score(doc=3277,freq=2.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.58723795 = fieldWeight in 3277, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.078125 = fieldNorm(doc=3277)
        0.032263044 = product of:
          0.06452609 = sum of:
            0.06452609 = weight(_text_:22 in 3277) [ClassicSimilarity], result of:
              0.06452609 = score(doc=3277,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.38690117 = fieldWeight in 3277, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3277)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  11. Ingwersen, P.: Cognitive perspectives of information retrieval interaction : elements of a cognitive IR theory (1996) 0.12
    0.11818143 = product of:
      0.17727214 = sum of:
        0.10511108 = weight(_text_:objects in 3616) [ClassicSimilarity], result of:
          0.10511108 = score(doc=3616,freq=4.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.41523993 = fieldWeight in 3616, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3616)
        0.07216106 = product of:
          0.14432213 = sum of:
            0.14432213 = weight(_text_:fusion in 3616) [ClassicSimilarity], result of:
              0.14432213 = score(doc=3616,freq=2.0), product of:
                0.35273543 = queryWeight, product of:
                  7.406428 = idf(docFreq=72, maxDocs=44218)
                  0.047625583 = queryNorm
                0.4091512 = fieldWeight in 3616, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.406428 = idf(docFreq=72, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3616)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The objective of this paper is to amalgamate theories of text retrieval from various research traditions into a cognitive theory for information retrieval interaction. Set in a cognitive framework, the paper outlines the concept of polyrepresentation applied to both the user's cognitive space and the information space of IR systems. The concept seeks to represent the current user's information need, problem state, and domain work task or interest in a structure of causality. Further, it implies that we should apply different methods of representation and a variety of IR techniques of different cognitive and functional origin simultaneously to each semantic full-text entity in the information space. The cognitive differences imply that by applying cognitive overlaps of information objects, originating from different interpretations of such objects through time and by type, the degree of uncertainty inherent in IR is decreased. ... The lack of consistency among authors, indexers, evaluators or users is of an identical cognitive nature. It is unavoidable, and indeed favourable to IR. In particular, for full-text retrieval, alternative semantic entities, including Salton et al.'s 'passage retrieval', are proposed to replace the traditional document record as the basic retrieval entity. These empirically observed phenomena of inconsistency and of semantic entities and values associated with data interpretation strongly support a cognitive approach to IR and the logical use of polyrepresentation, cognitive overlaps, and both data fusion and data diffusion
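At its simplest, a cognitive overlap is the intersection of result sets produced by functionally different representations of the same documents. A minimal sketch, with document IDs invented for illustration:

```python
def cognitive_overlap(result_sets):
    """Documents retrieved by every representation (total overlap)."""
    return set.intersection(*(set(r) for r in result_sets))

# Hypothetical top-k results from three functionally different
# representations of the same collection:
title_hits    = {"d1", "d2", "d5"}
citation_hits = {"d2", "d3", "d5"}
fulltext_hits = {"d2", "d4", "d5"}

print(sorted(cognitive_overlap([title_hits, citation_hits, fulltext_hits])))
# → ['d2', 'd5']
```

The polyrepresentation argument is that documents in this overlap carry less uncertainty, since independent interpretations of the collection agree on them.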
  12. Ng, K.B.; Loewenstern, D.; Basu, C.; Hirsh, H.; Kantor, P.B.: Data fusion of machine-learning methods for the TREC5 routing tak (and other work) (1997) 0.12
    0.11772345 = product of:
      0.35317034 = sum of:
        0.35317034 = sum of:
          0.28864425 = weight(_text_:fusion in 3107) [ClassicSimilarity], result of:
            0.28864425 = score(doc=3107,freq=2.0), product of:
              0.35273543 = queryWeight, product of:
                7.406428 = idf(docFreq=72, maxDocs=44218)
                0.047625583 = queryNorm
              0.8183024 = fieldWeight in 3107, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.406428 = idf(docFreq=72, maxDocs=44218)
                0.078125 = fieldNorm(doc=3107)
          0.06452609 = weight(_text_:22 in 3107) [ClassicSimilarity], result of:
            0.06452609 = score(doc=3107,freq=2.0), product of:
              0.16677667 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047625583 = queryNorm
              0.38690117 = fieldWeight in 3107, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=3107)
      0.33333334 = coord(1/3)
    
    Date
    27. 2.1999 20:59:22
  13. Larsen, B.; Ingwersen, P.; Lund, B.: Data fusion according to the principle of polyrepresentation (2009) 0.12
    0.117458045 = product of:
      0.35237414 = sum of:
        0.35237414 = sum of:
          0.3265637 = weight(_text_:fusion in 2752) [ClassicSimilarity], result of:
            0.3265637 = score(doc=2752,freq=16.0), product of:
              0.35273543 = queryWeight, product of:
                7.406428 = idf(docFreq=72, maxDocs=44218)
                0.047625583 = queryNorm
              0.9258035 = fieldWeight in 2752, product of:
                4.0 = tf(freq=16.0), with freq of:
                  16.0 = termFreq=16.0
                7.406428 = idf(docFreq=72, maxDocs=44218)
                0.03125 = fieldNorm(doc=2752)
          0.025810435 = weight(_text_:22 in 2752) [ClassicSimilarity], result of:
            0.025810435 = score(doc=2752,freq=2.0), product of:
              0.16677667 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047625583 = queryNorm
              0.15476047 = fieldWeight in 2752, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2752)
      0.33333334 = coord(1/3)
    
    Abstract
    We report data fusion experiments carried out on the four best-performing retrieval models from TREC 5. Three were conceptually/algorithmically very different from one another; one was algorithmically similar to one of the former. The objective of the test was to observe the performance of the 11 logical data fusion combinations compared to the performance of the four individual models and their intermediate fusions when following the principle of polyrepresentation. This principle is based on the cognitive IR perspective (Ingwersen & Järvelin, 2005) and implies that each retrieval model is regarded as a representation of a unique interpretation of information retrieval (IR). It predicts that only fusions of very different, but equally good, IR models may outperform each constituent as well as their intermediate fusions. Two kinds of experiments were carried out. One tested restricted fusions, in which only the inner disjoint overlap documents between fused models are ranked. The second set of experiments was based on traditional data fusion methods. The experiments involved the 30 TREC 5 topics that contain more than 44 relevant documents. In all tests, the Borda and CombSUM scoring methods were used. Performance was measured by precision and recall, with document cutoff values (DCVs) at 100 and 15 documents, respectively. Results show that restricted fusions made of two, three, or four cognitively/algorithmically very different retrieval models perform significantly better than do the individual models at DCV100. At DCV15, however, the results of polyrepresentative fusion were less predictable. The traditional fusion method based on polyrepresentation principles demonstrates a clear picture of performance at both DCV levels and verifies the polyrepresentation predictions for data fusion in IR. Data fusion improves retrieval performance over the constituent IR models only if the models are all conceptually/algorithmically quite dissimilar and all perform equally well, in that order of importance.
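The Borda and CombSUM rules mentioned in this abstract can be sketched as follows (the two runs and document IDs are invented for illustration; real use would normalize scores per run before CombSUM):

```python
def comb_sum(run_scores):
    """CombSUM: sum each document's scores across runs, rank by total."""
    fused = {}
    for run in run_scores:
        for doc, s in run.items():
            fused[doc] = fused.get(doc, 0.0) + s
    return sorted(fused, key=fused.get, reverse=True)

def borda(rankings):
    """Borda: each run awards n-1, n-2, ... points down its ranking."""
    points = {}
    for ranking in rankings:
        n = len(ranking)
        for i, doc in enumerate(ranking):
            points[doc] = points.get(doc, 0) + (n - 1 - i)
    return sorted(points, key=points.get, reverse=True)

# Hypothetical runs from two retrieval models:
run_a = {"d1": 0.9, "d2": 0.4, "d3": 0.1}
run_b = {"d2": 0.8, "d4": 0.5, "d1": 0.2}

print(comb_sum([run_a, run_b]))                     # → ['d2', 'd1', 'd4', 'd3']
print(borda([["d1", "d2", "d3"], ["d2", "d4", "d1"]]))  # → ['d2', 'd1', 'd4', 'd3']
```

Borda uses only rank positions, so it is insensitive to score scales; CombSUM keeps the score magnitudes, which is why per-run normalization matters in practice.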
    Date
    22. 3.2009 18:48:28
  14. Falquet, G.; Guyot, J.; Nerima, L.: Languages and tools to specify hypertext views on databases (1999) 0.12
    0.115892634 = product of:
      0.17383894 = sum of:
        0.15448111 = weight(_text_:objects in 3968) [ClassicSimilarity], result of:
          0.15448111 = score(doc=3968,freq=6.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.6102756 = fieldWeight in 3968, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=3968)
        0.019357827 = product of:
          0.038715653 = sum of:
            0.038715653 = weight(_text_:22 in 3968) [ClassicSimilarity], result of:
              0.038715653 = score(doc=3968,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.23214069 = fieldWeight in 3968, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3968)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    We present a declarative language for the construction of hypertext views on databases. The language is based on an object-oriented data model and a simple hypertext model with reference and inclusion links. A hypertext view specification consists of a collection of parameterized node schemes which specify how to construct node and link instances from the database contents. We show how this language can address different issues in hypertext view design. These include: the direct mapping of objects to nodes; the construction of complex nodes based on sets of objects; the representation of polymorphic sets of objects; and the representation of tree and graph structures. We have defined sublanguages corresponding to particular database models (relational, semantic, object-oriented) and implemented tools to generate Web views for these database models
    Date
    21.10.2000 15:01:22
  15. Yee, M.M.: What is a work? : part 1: the user and the objects of the catalog (1994) 0.11
    Abstract
    Part 1 of a series of articles, exploring the concept of 'the work' in cataloguing practice, which attempts to construct a definition of the term based on AACR theory and practice. The study begins with a consideration of the objects of the catalogue, their history and the evidence that bears on the question of the degree to which the user needs access to the work, as opposed to a particular edition of the work
    Footnote
    Vgl. auch: Pt.2: Cataloging and classification quarterly. 19(1994) no.2, S.5-22; Pt.3: Cataloging and classification quarterly. 20(1995) no.1, S.25-46; Pt.4: Cataloging and classification quarterly. 20(1995) no.2, S.3-24
  16. Benoit, G.; Hussey, L.: Repurposing digital objects : case studies across the publishing industry (2011) 0.11
    Abstract
    Large, data-rich organizations have tremendously large collections of digital objects to be "repurposed" in order to respond quickly and economically to publishing, marketing, and information needs. Managers typically assume that a content management system, or some other technique such as OWL and RDF, will automatically address the workflow and technical issues associated with this reuse. Four case studies show that the sources of some roadblocks to agile repurposing are as much managerial and organizational as they are technical in nature. The review concludes with suggestions on how digital object repurposing can be integrated given these organizations' structures.
    Date
    22. 1.2011 14:23:07
  17. Degez, D.: Compatibilité des langages d'indexation mariage, cohabitation ou fusion? : Quelques examples concrèts (1998) 0.11
    Date
    1. 8.1996 22:01:00
    Footnote
    Translated title: Compatibility of indexing languages: fusion, marriage or just living together? Some concrete examples
  18. Yee, R.; Beaubien, R.: ¬A preliminary crosswalk from METS to IMS content packaging (2004) 0.10
    Abstract
    As educational technology becomes pervasive, demand will grow for library content to be incorporated into courseware. Among the barriers impeding interoperability between libraries and educational tools is the difference in specifications commonly used for the exchange of digital objects and metadata. Among libraries, Metadata Encoding and Transmission Standard (METS) is a new but increasingly popular standard; the IMS content-package (IMS-CP) plays a parallel role in educational technology. This article describes how METS-encoded library content can be converted into digital objects for IMS-compliant systems through an XSLT-based crosswalk. The conceptual models behind METS and IMS-CP are compared, the design and limitations of an XSLT-based translation are described, and the crosswalks are related to other techniques to enhance interoperability.
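    The article's crosswalk is implemented in XSLT; the gist of such a mapping can be sketched in Python with the standard library's ElementTree instead. The element names below are simplified, non-namespaced stand-ins for the real METS and IMS-CP vocabularies, so this is an illustration of the crosswalk idea rather than the article's actual stylesheet.

```python
# Toy crosswalk in the spirit of the METS -> IMS-CP mapping described
# in the article, using ElementTree in place of XSLT. Element names
# are simplified stand-ins, not the full namespaced schemas.
import xml.etree.ElementTree as ET

METS = """
<mets>
  <fileSec>
    <file id="f1" href="lecture1.pdf"/>
    <file id="f2" href="notes.html"/>
  </fileSec>
</mets>
"""

def mets_to_ims(mets_xml: str) -> ET.Element:
    mets = ET.fromstring(mets_xml)
    manifest = ET.Element("manifest")              # IMS-CP root element
    resources = ET.SubElement(manifest, "resources")
    for f in mets.iter("file"):
        # Each METS <file> becomes an IMS-CP <resource> wrapping one <file>.
        res = ET.SubElement(resources, "resource",
                            identifier=f.get("id"), href=f.get("href"))
        ET.SubElement(res, "file", href=f.get("href"))
    return manifest

manifest = mets_to_ims(METS)
print(ET.tostring(manifest, encoding="unicode"))
```

    A real crosswalk must also carry structural and descriptive metadata (the METS structMap, for instance), which is where the limitations discussed in the article arise.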
    Source
    Library hi tech. 22(2004) no.1, S.69-81
  19. Ridenour, L.: Boundary objects : measuring gaps and overlap between research areas (2016) 0.10
    Abstract
    The aim of this paper is to develop methodology to determine conceptual overlap between research areas. It investigates patterns of terminology usage in scientific abstracts as boundary objects between research specialties. Research specialties were determined by high-level classifications assigned by Thomson Reuters in their Essential Science Indicators file, which provided a strictly hierarchical classification of journals into 22 categories. Results from the query "network theory" were downloaded from the Web of Science. From this file, two top-level groups, economics and social sciences, were selected and topically analyzed to provide a baseline of similarity on which to run an informetric analysis. The Places & Spaces Map of Science (Klavans and Boyack 2007) was used to determine the proximity of disciplines to one another in order to select the two disciplines used in the analysis. The groups analyzed share common theories and goals; however, they used different language to describe their research. It was found that 61% of terms were shared between the two groups.
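    A shared-terminology measure of the kind the abstract reports can be approximated with a simple set computation. This is a hedged sketch: the paper's tokenization, filtering, and exact overlap definition are not specified here, and the Jaccard variant below is only one plausible reading of "percent of terms shared"; the sample texts are invented.

```python
# Rough sketch of measuring terminology overlap between two research
# groups: build each group's vocabulary from its abstracts, then
# report the overlap of the two vocabularies (Jaccard variant).
# Tokenization is deliberately naive; the paper's preprocessing
# is not reproduced.
import re

def vocabulary(texts):
    words = set()
    for t in texts:
        words.update(re.findall(r"[a-z]+", t.lower()))
    return words

def shared_fraction(group_a, group_b):
    a, b = vocabulary(group_a), vocabulary(group_b)
    return len(a & b) / len(a | b)   # shared terms / all terms

econ = ["network theory models market structure"]
soc = ["network theory describes social structure"]
print(round(shared_fraction(econ, soc), 2))
```

    With real corpora one would also filter stopwords and very rare terms before comparing vocabularies, since they dominate naive overlap counts.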
  20. Ortega, C.D.: Conceptual and procedural grounding of documentary systems (2012) 0.10
    Abstract
    Documentary activities are informational operations of selection and representation of objects, based on their features and predictable use. In order to make them more dynamic, these activities are carried out systemically, according to institutionally limited (in the sense of social institution) information projects. This organic approach leads to the constitution of information systems, or, more specifically, systems of documentary information, inasmuch as they refer to actions on documents as objects from which information is produced. Thus, systems of documentary information are called documentary systems. This article aims to list and systematize elements with the potential for a generalizing and categorical approach to documentary systems. We approach the systems according to: elements of reference (the documents and their information, the users, and the institutional context); constitutive elements (collection and references); structural elements (constituent units and the relations among them); modes of production (pre- or post-representation of the document); management aspects (flow of documents and of their information); and, finally, typology (management systems and information retrieval systems). Thus, documentary systems can be considered products of operations involving institutionally delimited objects for the production of collections (virtual or not) and their references, whose objective is the appropriation of information by the user.
    Content
    Paper in a section "Selected Papers from the 1st Brazilian Conference on Knowledge Organization and Representation, Faculdade de Ciência da Informação, Campus Universitário Darcy Ribeiro Brasília, DF Brasil, October 20-22, 2011". Cf.: http://www.ergon-verlag.de/isko_ko/downloads/ko_39_2012_3_h.pdf.

Languages

Types

  • a 3557
  • m 387
  • el 217
  • s 169
  • x 40
  • b 39
  • i 23
  • r 22
  • ? 9
  • n 4
  • p 4
  • d 3
  • u 2
  • z 2
  • au 1
  • h 1

Themes

Subjects

Classifications