Search (4188 results, page 1 of 210)

  1. Larsen, B.; Ingwersen, P.; Lund, B.: Data fusion according to the principle of polyrepresentation (2009) 0.21
    0.20611626 = product of:
      0.6183488 = sum of:
        0.6183488 = sum of:
          0.59155583 = weight(_text_:fusions in 2752) [ClassicSimilarity], result of:
            0.59155583 = score(doc=2752,freq=10.0), product of:
              0.54400814 = queryWeight, product of:
                11.00374 = idf(docFreq=1, maxDocs=44218)
                0.049438477 = queryNorm
              1.0874026 = fieldWeight in 2752, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                11.00374 = idf(docFreq=1, maxDocs=44218)
                0.03125 = fieldNorm(doc=2752)
          0.026792923 = weight(_text_:22 in 2752) [ClassicSimilarity], result of:
            0.026792923 = score(doc=2752,freq=2.0), product of:
              0.17312512 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049438477 = queryNorm
              0.15476047 = fieldWeight in 2752, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2752)
      0.33333334 = coord(1/3)
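    The indented tree above is Lucene's "explain" output for this entry's relevance score, using the ClassicSimilarity (TF-IDF) model the tree itself names. As a minimal sketch, the arithmetic can be recomputed directly from the constants the tree reports:

```python
import math

# Recompute the explanation tree above (doc 2752) from its own constants:
#   queryWeight = idf * queryNorm
#   fieldWeight = sqrt(freq) * idf * fieldNorm
#   term weight = queryWeight * fieldWeight
query_norm = 0.049438477
field_norm = 0.03125

idf_fusions = 11.00374  # idf(docFreq=1, maxDocs=44218)
w_fusions = (idf_fusions * query_norm) * (math.sqrt(10.0) * idf_fusions * field_norm)

idf_22 = 3.5018296      # idf(docFreq=3622, maxDocs=44218)
w_22 = (idf_22 * query_norm) * (math.sqrt(2.0) * idf_22 * field_norm)

# coord(1/3) from the tree scales the summed clause score.
score = (w_fusions + w_22) / 3
print(w_fusions, w_22, score)  # ~0.59155583  ~0.026792923  ~0.20611626
```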
    
    Abstract
    We report data fusion experiments carried out on the four best-performing retrieval models from TREC 5. Three were conceptually/algorithmically very different from one another; one was algorithmically similar to one of the former. The objective of the test was to observe the performance of the 11 logical data fusion combinations compared to the performance of the four individual models and their intermediate fusions when following the principle of polyrepresentation. This principle is based on the cognitive IR perspective (Ingwersen & Järvelin, 2005) and implies that each retrieval model is regarded as a representation of a unique interpretation of information retrieval (IR). It predicts that only fusions of very different, but equally good, IR models may outperform each constituent as well as their intermediate fusions. Two kinds of experiments were carried out. One tested restricted fusions, which entails that only the inner disjoint overlap documents between fused models are ranked. The second set of experiments was based on traditional data fusion methods. The experiments involved the 30 TREC 5 topics that contain more than 44 relevant documents. In all tests, the Borda and CombSUM scoring methods were used. Performance was measured by precision and recall, with document cutoff values (DCVs) at 100 and 15 documents, respectively. Results show that restricted fusions made of two, three, or four cognitively/algorithmically very different retrieval models perform significantly better than do the individual models at DCV100. At DCV15, however, the results of polyrepresentative fusion were less predictable. The traditional fusion method based on polyrepresentation principles demonstrates a clear picture of performance at both DCV levels and verifies the polyrepresentation predictions for data fusion in IR. Data fusion improves retrieval performance over the constituent IR models only if the models are all quite conceptually/algorithmically dissimilar and equally well performing, in that order of importance.
    Date
    22. 3.2009 18:48:28
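    The abstract of this entry names Borda and CombSUM as the two fusion scoring methods. As an illustrative sketch (not the authors' code; the runs below are hypothetical toy data), the two combination rules can be written as:

```python
from collections import defaultdict

def comb_sum(runs):
    """CombSUM: a document's fused score is the sum of its (normalized)
    scores across all input runs; missing documents contribute 0."""
    fused = defaultdict(float)
    for run in runs:                      # run: {doc_id: normalized_score}
        for doc, score in run.items():
            fused[doc] += score
    return sorted(fused.items(), key=lambda kv: -kv[1])

def borda(rankings, pool_size):
    """Borda: each run awards pool_size - rank points to each document."""
    points = defaultdict(float)
    for ranking in rankings:              # ranking: [doc_id, ...], best first
        for rank, doc in enumerate(ranking):
            points[doc] += pool_size - rank
    return sorted(points.items(), key=lambda kv: -kv[1])

# Hypothetical runs from two retrieval models:
print(comb_sum([{"d1": 0.9, "d2": 0.4}, {"d1": 0.2, "d3": 0.8}]))
print(borda([["d1", "d2"], ["d3", "d1"]], pool_size=3))
```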
  2. Rauber, A.: Digital preservation in data-driven science : on the importance of process capture, preservation and validation (2012) 0.19
    0.19490075 = product of:
      0.29235113 = sum of:
        0.1071816 = product of:
          0.3215448 = sum of:
            0.3215448 = weight(_text_:object's in 469) [ClassicSimilarity], result of:
              0.3215448 = score(doc=469,freq=2.0), product of:
                0.48969442 = queryWeight, product of:
                  9.905128 = idf(docFreq=5, maxDocs=44218)
                  0.049438477 = queryNorm
                0.65662336 = fieldWeight in 469, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  9.905128 = idf(docFreq=5, maxDocs=44218)
                  0.046875 = fieldNorm(doc=469)
          0.33333334 = coord(1/3)
        0.18516952 = weight(_text_:objects in 469) [ClassicSimilarity], result of:
          0.18516952 = score(doc=469,freq=8.0), product of:
            0.262769 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.049438477 = queryNorm
            0.7046855 = fieldWeight in 469, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=469)
      0.6666667 = coord(2/3)
    
    Abstract
    Current digital preservation is strongly biased towards data objects: digital files of document-style objects, or encapsulated and largely self-contained objects. To provide authenticity and provenance information, comprehensive metadata models are deployed to document information on an object's context. Yet, we claim that simply documenting an object's context may not be sufficient to ensure proper provenance and to fulfill the stated preservation goals. Specifically in e-Science and business settings, capturing, documenting and preserving entire processes may be necessary to meet the preservation goals. We thus present an approach for capturing, documenting and preserving processes, and means to assess their authenticity upon re-execution. We will discuss options as well as limitations and open challenges to achieve sound preservation, specifically within scientific processes.
  3. Dick, S.J.: Astronomy's Three Kingdom System : a comprehensive classification system of celestial objects (2019) 0.16
    0.15964994 = product of:
      0.2394749 = sum of:
        0.21603109 = weight(_text_:objects in 5455) [ClassicSimilarity], result of:
          0.21603109 = score(doc=5455,freq=8.0), product of:
            0.262769 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.049438477 = queryNorm
            0.82213306 = fieldWeight in 5455, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5455)
        0.023443809 = product of:
          0.046887618 = sum of:
            0.046887618 = weight(_text_:22 in 5455) [ClassicSimilarity], result of:
              0.046887618 = score(doc=5455,freq=2.0), product of:
                0.17312512 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049438477 = queryNorm
                0.2708308 = fieldWeight in 5455, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5455)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Although classification has been an important aspect of astronomy since stellar spectroscopy in the late nineteenth century, to date no comprehensive classification system has existed for all classes of objects in the universe. Here we present such a system, and lay out its foundational definitions and principles. The system consists of the "Three Kingdoms" of planets, stars and galaxies, eighteen families, and eighty-two classes of objects. Gravitation is the defining organizing principle for the families and classes, and the physical nature of the objects is the defining characteristic of the classes. The system should prove useful for both scientific and pedagogical purposes.
    Date
    21.11.2019 18:46:22
  4. Scott, M.L.: Dewey Decimal Classification, 22nd edition : a study manual and number building guide (2005) 0.16
    0.15545893 = product of:
      0.23318839 = sum of:
        0.19969724 = product of:
          0.5990917 = sum of:
            0.5990917 = weight(_text_:22nd in 4594) [ClassicSimilarity], result of:
              0.5990917 = score(doc=4594,freq=4.0), product of:
                0.43538073 = queryWeight, product of:
                  8.806516 = idf(docFreq=17, maxDocs=44218)
                  0.049438477 = queryNorm
                1.376018 = fieldWeight in 4594, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  8.806516 = idf(docFreq=17, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4594)
          0.33333334 = coord(1/3)
        0.033491157 = product of:
          0.066982314 = sum of:
            0.066982314 = weight(_text_:22 in 4594) [ClassicSimilarity], result of:
              0.066982314 = score(doc=4594,freq=2.0), product of:
                0.17312512 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049438477 = queryNorm
                0.38690117 = fieldWeight in 4594, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4594)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This work has been fully updated for the 22nd edition of DDC and can be used as a reference for the application of Dewey coding or as a course text on the Dewey system.
    Object
    DDC-22
  5. Proffitt, M.: Pulling it all together : use of METS in RLG cultural materials service (2004) 0.13
    0.13424829 = product of:
      0.20137243 = sum of:
        0.1745795 = weight(_text_:objects in 767) [ClassicSimilarity], result of:
          0.1745795 = score(doc=767,freq=4.0), product of:
            0.262769 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.049438477 = queryNorm
            0.6643839 = fieldWeight in 767, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0625 = fieldNorm(doc=767)
        0.026792923 = product of:
          0.053585846 = sum of:
            0.053585846 = weight(_text_:22 in 767) [ClassicSimilarity], result of:
              0.053585846 = score(doc=767,freq=2.0), product of:
                0.17312512 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049438477 = queryNorm
                0.30952093 = fieldWeight in 767, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=767)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    RLG has used METS for a particular application, that is, as a wrapper for structural metadata. When RLG Cultural Materials was launched, there was no single way to deal with "complex digital objects". METS provides a standard means of encoding metadata regarding the digital objects represented in RCM, and METS has now been fully integrated into the workflow for this service.
    Source
    Library hi tech. 22(2004) no.1, S.65-68
  6. Johnson, E.H.: Using IODyne : Illustrations and examples (1998) 0.13
    0.13424829 = product of:
      0.20137243 = sum of:
        0.1745795 = weight(_text_:objects in 2341) [ClassicSimilarity], result of:
          0.1745795 = score(doc=2341,freq=4.0), product of:
            0.262769 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.049438477 = queryNorm
            0.6643839 = fieldWeight in 2341, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0625 = fieldNorm(doc=2341)
        0.026792923 = product of:
          0.053585846 = sum of:
            0.053585846 = weight(_text_:22 in 2341) [ClassicSimilarity], result of:
              0.053585846 = score(doc=2341,freq=2.0), product of:
                0.17312512 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049438477 = queryNorm
                0.30952093 = fieldWeight in 2341, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2341)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    IODyne is an Internet client program that allows one to retrieve information from servers by dynamically combining information objects. Information objects are abstract representations of bibliographic data, typically titles (or title keywords), author names, subject and classification identifiers, and full-text search terms.
    Date
    22. 9.1997 19:16:05
  7. Egghe, L.: Properties of the n-overlap vector and n-overlap similarity theory (2006) 0.13
    0.13228679 = product of:
      0.19843018 = sum of:
        0.089318 = product of:
          0.267954 = sum of:
            0.267954 = weight(_text_:object's in 194) [ClassicSimilarity], result of:
              0.267954 = score(doc=194,freq=2.0), product of:
                0.48969442 = queryWeight, product of:
                  9.905128 = idf(docFreq=5, maxDocs=44218)
                  0.049438477 = queryNorm
                0.54718614 = fieldWeight in 194, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  9.905128 = idf(docFreq=5, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=194)
          0.33333334 = coord(1/3)
        0.10911219 = weight(_text_:objects in 194) [ClassicSimilarity], result of:
          0.10911219 = score(doc=194,freq=4.0), product of:
            0.262769 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.049438477 = queryNorm
            0.41523993 = fieldWeight in 194, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=194)
      0.6666667 = coord(2/3)
    
    Abstract
    In the first part of this article the author defines the n-overlap vector whose coordinates consist of the fraction of the objects (e.g., books, N-grams, etc.) that belong to 1, 2, ..., n sets (more generally: families) (e.g., libraries, databases, etc.). With the aid of the Lorenz concentration theory, a theory of n-overlap similarity is conceived together with corresponding measures, such as the generalized Jaccard index (generalizing the well-known Jaccard index in the case n = 2). Next, the distributional form of the n-overlap vector is determined, assuming certain distributions of the object's and of the set (family) sizes. In this section the decreasing power law and the decreasing exponential distribution are explained for the n-overlap vector. Both item (token) n-overlap and source (type) n-overlap are studied. The n-overlap properties of objects indexed by a hierarchical system (e.g., books indexed by numbers from a UDC or Dewey system or by N-grams) are presented in the final section. The author shows how the results given in the previous section can be applied as well as how the Lorenz order of the n-overlap vector is respected by an increase or a decrease of the level of refinement in the hierarchical system (e.g., the value N in N-grams).
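    For reference, the well-known Jaccard index that Egghe's measure generalizes (the n = 2 case) is simply overlap over union; a toy illustration:

```python
def jaccard(a: set, b: set) -> float:
    """Classic Jaccard index, the n = 2 case: |A intersect B| / |A union B|."""
    return len(a & b) / len(a | b)

# Two toy 'libraries' sharing one of three distinct books:
print(jaccard({"book1", "book2"}, {"book2", "book3"}))  # 1/3 = 0.333...
```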
  8. Bordogna, G.; Pagani, M.: ¬A flexible content-based image retrieval model and a customizable system for the retrieval of shapes (2010) 0.13
    0.13228679 = product of:
      0.19843018 = sum of:
        0.089318 = product of:
          0.267954 = sum of:
            0.267954 = weight(_text_:object's in 3450) [ClassicSimilarity], result of:
              0.267954 = score(doc=3450,freq=2.0), product of:
                0.48969442 = queryWeight, product of:
                  9.905128 = idf(docFreq=5, maxDocs=44218)
                  0.049438477 = queryNorm
                0.54718614 = fieldWeight in 3450, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  9.905128 = idf(docFreq=5, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3450)
          0.33333334 = coord(1/3)
        0.10911219 = weight(_text_:objects in 3450) [ClassicSimilarity], result of:
          0.10911219 = score(doc=3450,freq=4.0), product of:
            0.262769 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.049438477 = queryNorm
            0.41523993 = fieldWeight in 3450, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3450)
      0.6666667 = coord(2/3)
    
    Abstract
    The authors describe a flexible model and a system for content-based image retrieval of objects' shapes. Flexibility is intended as the possibility of customizing the system behavior to the user's needs and perceptions. This is achieved by allowing users to modify the retrieval function. The system implementing this model uses multiple representations to characterize some macroscopic characteristics of the objects' shapes. Specifically, the shape indexes describe the global features of the object's contour (represented by the Fourier coefficients), the contour's irregularities (represented by the multifractal spectrum), and the presence of concavities and convexities (represented by the contour scale space distribution). During query formulation, the user can specify both the preference for the macroscopic shape aspects that he or she considers meaningful for the retrieval, and the desired level of accuracy of the matching, which means that the visual query shape is considered with a given tolerance in representing the desired shapes. The evaluation experiments showed that this system can be suited to different retrieval behaviors and that, generally, the combination of the multiple shape representations increases both recall and precision with respect to the application of any single representation.
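    Of the three shape representations named in the abstract, the Fourier description of a contour is the most standard; a minimal NumPy sketch of such a descriptor (an illustration of the general technique, not the authors' implementation):

```python
import numpy as np

def fourier_descriptors(contour_xy, k=16):
    """Describe a closed contour by the magnitudes of its first k Fourier
    coefficients. Treating each (x, y) point as a complex number and
    dropping the DC term gives translation invariance; dividing by the
    first coefficient's magnitude gives scale invariance."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
    coeffs = np.fft.fft(z)
    mags = np.abs(coeffs[1:k + 1])
    return mags / mags[0]

# Toy contour: 64 points sampled on an ellipse.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
contour = np.stack([3 * np.cos(t), np.sin(t)], axis=1)
print(fourier_descriptors(contour)[:4])
```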
  9. Srinivasan, R.; Boast, R.; Becvar, K.M.; Furner, J.: Blobgects : digital museum catalogs and diverse user communities (2009) 0.13
    0.12617809 = product of:
      0.18926711 = sum of:
        0.17252153 = weight(_text_:objects in 2754) [ClassicSimilarity], result of:
          0.17252153 = score(doc=2754,freq=10.0), product of:
            0.262769 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.049438477 = queryNorm
            0.656552 = fieldWeight in 2754, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2754)
        0.016745578 = product of:
          0.033491157 = sum of:
            0.033491157 = weight(_text_:22 in 2754) [ClassicSimilarity], result of:
              0.033491157 = score(doc=2754,freq=2.0), product of:
                0.17312512 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049438477 = queryNorm
                0.19345059 = fieldWeight in 2754, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2754)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This article presents an exploratory study of Blobgects, an experimental interface for an online museum catalog that enables social tagging and blogging activity around a set of cultural heritage objects held by a preeminent museum of anthropology and archaeology. This study attempts to understand not just whether social tagging and commenting about these objects is useful but rather whose tags and voices matter in presenting different expert perspectives around digital museum objects. Based on an empirical comparison between two different user groups (Canadian Inuit high-school students and museum studies students in the United States), we found that merely adding the ability to tag and comment to the museum's catalog does not sufficiently allow users to learn about or engage with the objects represented by catalog entries. Rather, the specialist language of the catalog provides too little contextualization for users to enter into the sort of dialog that proponents of Web 2.0 technologies promise. Overall, we propose a more nuanced application of Web 2.0 technologies within museums - one which provides a contextual basis that gives users a starting point for engagement and permits users to make sense of objects in relation to their own needs, uses, and understandings.
    Date
    22. 3.2009 18:52:32
  10. Holetschek, J. et al.: Natural history in Europeana : accessing scientific collection objects via LOD (2016) 0.13
    0.1251994 = product of:
      0.1877991 = sum of:
        0.15430793 = weight(_text_:objects in 3277) [ClassicSimilarity], result of:
          0.15430793 = score(doc=3277,freq=2.0), product of:
            0.262769 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.049438477 = queryNorm
            0.58723795 = fieldWeight in 3277, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.078125 = fieldNorm(doc=3277)
        0.033491157 = product of:
          0.066982314 = sum of:
            0.066982314 = weight(_text_:22 in 3277) [ClassicSimilarity], result of:
              0.066982314 = score(doc=3277,freq=2.0), product of:
                0.17312512 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049438477 = queryNorm
                0.38690117 = fieldWeight in 3277, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3277)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  11. Falquet, G.; Guyot, J.; Nerima, L.: Languages and tools to specify hypertext views on databases (1999) 0.12
    0.12030415 = product of:
      0.18045622 = sum of:
        0.16036153 = weight(_text_:objects in 3968) [ClassicSimilarity], result of:
          0.16036153 = score(doc=3968,freq=6.0), product of:
            0.262769 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.049438477 = queryNorm
            0.6102756 = fieldWeight in 3968, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=3968)
        0.020094693 = product of:
          0.040189385 = sum of:
            0.040189385 = weight(_text_:22 in 3968) [ClassicSimilarity], result of:
              0.040189385 = score(doc=3968,freq=2.0), product of:
                0.17312512 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049438477 = queryNorm
                0.23214069 = fieldWeight in 3968, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3968)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    We present a declarative language for the construction of hypertext views on databases. The language is based on an object-oriented data model and a simple hypertext model with reference and inclusion links. A hypertext view specification consists of a collection of parameterized node schemes which specify how to construct node and link instances from the database contents. We show how this language can express different issues in hypertext view design. These include: the direct mapping of objects to nodes; the construction of complex nodes based on sets of objects; the representation of polymorphic sets of objects; and the representation of tree and graph structures. We have defined sublanguages corresponding to particular database models (relational, semantic, object-oriented) and implemented tools to generate Web views for these database models.
    Date
    21.10.2000 15:01:22
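    The "direct mapping of objects to nodes" that this entry's abstract mentions can be pictured with a small sketch (hypothetical object fields, and Python in place of the authors' declarative language):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One hypertext node built from a database object."""
    title: str
    body: str
    links: list = field(default_factory=list)  # (label, target) reference links

def book_node(book: dict) -> Node:
    """A parameterized node scheme: map one database object to one node
    and derive reference links from the object's attributes."""
    node = Node(title=book["title"], body=f"{book['title']} / {book['author']}")
    node.links.append(("author", f"node://author/{book['author']}"))
    return node

print(book_node({"title": "Data fusion", "author": "Larsen, B."}))
```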
  12. Yee, M.M.: What is a work? : part 1: the user and the objects of the catalog (1994) 0.12
    0.11746725 = product of:
      0.17620087 = sum of:
        0.15275706 = weight(_text_:objects in 735) [ClassicSimilarity], result of:
          0.15275706 = score(doc=735,freq=4.0), product of:
            0.262769 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.049438477 = queryNorm
            0.5813359 = fieldWeight in 735, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0546875 = fieldNorm(doc=735)
        0.023443809 = product of:
          0.046887618 = sum of:
            0.046887618 = weight(_text_:22 in 735) [ClassicSimilarity], result of:
              0.046887618 = score(doc=735,freq=2.0), product of:
                0.17312512 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049438477 = queryNorm
                0.2708308 = fieldWeight in 735, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=735)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Part 1 of a series of articles exploring the concept of 'the work' in cataloguing practice; the series attempts to construct a definition of the term based on AACR theory and practice. The study begins with a consideration of the objects of the catalogue, their history, and the evidence that bears on the question of the degree to which the user needs access to the work, as opposed to a particular edition of the work.
    Footnote
    Vgl. auch: Pt.2: Cataloging and classification quarterly. 19(1994) no.2, S.5-22; Pt.3: Cataloging and classification quarterly. 20(1995) no.1, S.25-46; Pt.4: Cataloging and classification quarterly. 20(1995) no.2, S.3-24
  13. Benoit, G.; Hussey, L.: Repurposing digital objects : case studies across the publishing industry (2011) 0.12
    0.11746725 = product of:
      0.17620087 = sum of:
        0.15275706 = weight(_text_:objects in 4198) [ClassicSimilarity], result of:
          0.15275706 = score(doc=4198,freq=4.0), product of:
            0.262769 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.049438477 = queryNorm
            0.5813359 = fieldWeight in 4198, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4198)
        0.023443809 = product of:
          0.046887618 = sum of:
            0.046887618 = weight(_text_:22 in 4198) [ClassicSimilarity], result of:
              0.046887618 = score(doc=4198,freq=2.0), product of:
                0.17312512 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049438477 = queryNorm
                0.2708308 = fieldWeight in 4198, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4198)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Large, data-rich organizations have tremendously large collections of digital objects to be "repurposed," so that they can respond quickly and economically to publishing, marketing, and information needs. Management typically assumes that a content management system, or some other technique such as OWL and RDF, will automatically address the workflow and technical issues associated with this reuse. Four case studies show that the sources of some roadblocks to agile repurposing are as much managerial and organizational as they are technical in nature. The review concludes with suggestions on how digital object repurposing can be integrated given these organizations' structures.
    Date
    22. 1.2011 14:23:07
  14. Forsyth, D.A.: Finding pictures of objects in large collections of images (1997) 0.11
    0.106235206 = product of:
      0.15935281 = sum of:
        0.13093463 = weight(_text_:objects in 763) [ClassicSimilarity], result of:
          0.13093463 = score(doc=763,freq=4.0), product of:
            0.262769 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.049438477 = queryNorm
            0.49828792 = fieldWeight in 763, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=763)
        0.028418189 = product of:
          0.056836378 = sum of:
            0.056836378 = weight(_text_:22 in 763) [ClassicSimilarity], result of:
              0.056836378 = score(doc=763,freq=4.0), product of:
                0.17312512 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049438477 = queryNorm
                0.32829654 = fieldWeight in 763, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=763)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Describes an approach to the problem of object recognition structured around a sequence of increasingly specialised grouping activities that assemble coherent regions of images that can be shown to satisfy increasingly stringent conditions. The recognition system is designed to cope with: colour and texture; the ability to deal with general objects in uncontrolled configurations and contexts; and a satisfactory notion of classification. These properties are illustrated using 3 case studies, demonstrating: the use of descriptions that fuse colour and spatial properties; the use of fusion of texture and geometric properties to describe trees; and the use of a recognition system to determine accurately whether an image contains people and animals.
    Date
    22. 9.1997 19:16:05
    3. 1.1999 12:21:22
  15. Yee, R.; Beaubien, R.: ¬A preliminary crosswalk from METS to IMS content packaging (2004) 0.10
    0.100686215 = product of:
      0.15102932 = sum of:
        0.13093463 = weight(_text_:objects in 4752) [ClassicSimilarity], result of:
          0.13093463 = score(doc=4752,freq=4.0), product of:
            0.262769 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.049438477 = queryNorm
            0.49828792 = fieldWeight in 4752, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=4752)
        0.020094693 = product of:
          0.040189385 = sum of:
            0.040189385 = weight(_text_:22 in 4752) [ClassicSimilarity], result of:
              0.040189385 = score(doc=4752,freq=2.0), product of:
                0.17312512 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049438477 = queryNorm
                0.23214069 = fieldWeight in 4752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4752)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    As educational technology becomes pervasive, demand will grow for library content to be incorporated into courseware. Among the barriers impeding interoperability between libraries and educational tools is the difference in specifications commonly used for the exchange of digital objects and metadata. Among libraries, the Metadata Encoding and Transmission Standard (METS) is a new but increasingly popular standard; the IMS content package (IMS-CP) plays a parallel role in educational technology. This article describes how METS-encoded library content can be converted into digital objects for IMS-compliant systems through an XSLT-based crosswalk. The conceptual models behind METS and IMS-CP are compared, the design and limitations of an XSLT-based translation are described, and the crosswalks are related to other techniques to enhance interoperability.
    Source
    Library hi tech. 22(2004) no.1, S.69-81
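    Mechanically, an XSLT-based crosswalk of the kind this entry describes is applied as below (a sketch only; the file names are hypothetical and the article's actual mapping rules are not reproduced):

```python
from lxml import etree

# Hypothetical file names; the stylesheet would encode the METS -> IMS-CP
# mapping rules that the article describes.
mets_doc = etree.parse("object_mets.xml")
crosswalk = etree.XSLT(etree.parse("mets2imscp.xsl"))
ims_manifest = crosswalk(mets_doc)

with open("imsmanifest.xml", "wb") as out:
    out.write(etree.tostring(ims_manifest, pretty_print=True,
                             xml_declaration=True, encoding="UTF-8"))
```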
  16. Ridenour, L.: Boundary objects : measuring gaps and overlap between research areas (2016) 0.10
    0.100686215 = product of:
      0.15102932 = sum of:
        0.13093463 = weight(_text_:objects in 2835) [ClassicSimilarity], result of:
          0.13093463 = score(doc=2835,freq=4.0), product of:
            0.262769 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.049438477 = queryNorm
            0.49828792 = fieldWeight in 2835, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=2835)
        0.020094693 = product of:
          0.040189385 = sum of:
            0.040189385 = weight(_text_:22 in 2835) [ClassicSimilarity], result of:
              0.040189385 = score(doc=2835,freq=2.0), product of:
                0.17312512 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049438477 = queryNorm
                0.23214069 = fieldWeight in 2835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2835)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The aim of this paper is to develop methodology to determine conceptual overlap between research areas. It investigates patterns of terminology usage in scientific abstracts as boundary objects between research specialties. Research specialties were determined by high-level classifications assigned by Thomson Reuters in their Essential Science Indicators file, which provided a strictly hierarchical classification of journals into 22 categories. Results from the query "network theory" were downloaded from the Web of Science. From this file, two top-level groups, economics and social sciences, were selected and topically analyzed to provide a baseline of similarity on which to run an informetric analysis. The Places & Spaces Map of Science (Klavans and Boyack 2007) was used to determine the proximity of disciplines to one another in order to select the two disciplines used in the analysis. The groups analyzed share common theories and goals; however, they used different language to describe their research. It was found that 61% of term words were shared between the two groups.
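    A shared-terminology figure like the 61% reported above can be computed in several ways; one minimal sketch (the tokenization scheme and toy abstracts are assumptions, not the paper's procedure):

```python
import re

def shared_term_percentage(texts_a, texts_b):
    """Percentage of group A's distinct terms that group B also uses."""
    vocab = lambda texts: {w for t in texts
                           for w in re.findall(r"[a-z]+", t.lower())}
    a, b = vocab(texts_a), vocab(texts_b)
    return 100 * len(a & b) / len(a)

economics = ["Network theory models contagion in markets."]
social = ["Network theory describes diffusion in social groups."]
print(f"{shared_term_percentage(economics, social):.0f}% of terms shared")
```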
  17. Ortega, C.D.: Conceptual and procedural grounding of documentary systems (2012) 0.10
    0.100253455 = product of:
      0.15038018 = sum of:
        0.1336346 = weight(_text_:objects in 143) [ClassicSimilarity], result of:
          0.1336346 = score(doc=143,freq=6.0), product of:
            0.262769 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.049438477 = queryNorm
            0.508563 = fieldWeight in 143, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=143)
        0.016745578 = product of:
          0.033491157 = sum of:
            0.033491157 = weight(_text_:22 in 143) [ClassicSimilarity], result of:
              0.033491157 = score(doc=143,freq=2.0), product of:
                0.17312512 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049438477 = queryNorm
                0.19345059 = fieldWeight in 143, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=143)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Documentary activities are informational operations of selection and representation of objects made from their features and predictable use. In order to make them more dynamic, these activities are carried out systemically, according to institutionally limited (in the sense of social institution) information projects. This organic approach leads to the constitution of information systems or, more specifically, systems of documentary information, inasmuch as they refer to actions on documents as objects from which information is produced. Thus, systems of documentary information are called documentary systems. This article aims to list and systematize elements with the potential to support a generalizing and categorical approach to documentary systems. We approach the systems according to: elements of reference (the documents and their information, the users, and the institutional context); constitutive elements (collection and references); structural elements (constituent units and the relations among them); modes of production (pre- or post-representation of the document); management aspects (flow of documents and of their information); and, finally, typology (management systems and information retrieval systems). Thus, documentary systems can be considered products of operations involving institutionally delimited objects for the production of collections (virtual or not) and their references, whose objective is the appropriation of information by the user.
    Content
    Paper from the section "Selected Papers from the 1st Brazilian Conference on Knowledge Organization and Representation, Faculdade de Ciência da Informação, Campus Universitário Darcy Ribeiro, Brasília, DF, Brasil, October 20-22, 2011". See: http://www.ergon-verlag.de/isko_ko/downloads/ko_39_2012_3_h.pdf.
  18. Understanding metadata (2004) 0.10
    0.10015952 = product of:
      0.15023927 = sum of:
        0.123446345 = weight(_text_:objects in 2686) [ClassicSimilarity], result of:
          0.123446345 = score(doc=2686,freq=2.0), product of:
            0.262769 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.049438477 = queryNorm
            0.46979034 = fieldWeight in 2686, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0625 = fieldNorm(doc=2686)
        0.026792923 = product of:
          0.053585846 = sum of:
            0.053585846 = weight(_text_:22 in 2686) [ClassicSimilarity], result of:
              0.053585846 = score(doc=2686,freq=2.0), product of:
                0.17312512 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049438477 = queryNorm
                0.30952093 = fieldWeight in 2686, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2686)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Metadata (structured information about an object or collection of objects) is increasingly important to libraries, archives, and museums. And although librarians are familiar with a number of issues that apply to creating and using metadata (e.g., authority control, controlled vocabularies, etc.), the world of metadata is nonetheless different from library cataloging, with its own set of challenges. Therefore, whether you are new to these concepts or quite experienced with classic cataloging, this short (20-page) introductory paper on metadata can be helpful.
    Date
    10. 9.2004 10:22:40
  19. Malsburg, C. von der: ¬The correlation theory of brain function (1981) 0.10
    0.09505898 = product of:
      0.14258847 = sum of:
        0.06543451 = product of:
          0.19630352 = sum of:
            0.19630352 = weight(_text_:3a in 76) [ClassicSimilarity], result of:
              0.19630352 = score(doc=76,freq=2.0), product of:
                0.41913995 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.049438477 = queryNorm
                0.46834838 = fieldWeight in 76, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=76)
          0.33333334 = coord(1/3)
        0.077153966 = weight(_text_:objects in 76) [ClassicSimilarity], result of:
          0.077153966 = score(doc=76,freq=2.0), product of:
            0.262769 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.049438477 = queryNorm
            0.29361898 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=76)
      0.6666667 = coord(2/3)
    
    Abstract
    A summary of brain theory is given so far as it is contained within the framework of Localization Theory. Difficulties of this "conventional theory" are traced back to a specific deficiency: there is no way to express relations between active cells (as for instance their representing parts of the same object). A new theory is proposed to cure this deficiency. It introduces a new kind of dynamical control, termed synaptic modulation, according to which synapses switch between a conducting and a non-conducting state. The dynamics of this variable is controlled on a fast time scale by correlations in the temporal fine structure of cellular signals. Furthermore, conventional synaptic plasticity is replaced by a refined version. Synaptic modulation and plasticity form the basis for short-term and long-term memory, respectively. Signal correlations, shaped by the variable network, express structure and relationships within objects. In particular, the figure-ground problem may be solved in this way. Synaptic modulation introduces flexibility into cerebral networks, which is necessary to solve the invariance problem. Since momentarily useless connections are deactivated, interference between different memory traces can be reduced, and memory capacity increased, in comparison with conventional associative memory.
    Source
    http://cogprints.org/1380/1/vdM_correlation.pdf
  20. Chowdhury, G.G.; Neelameghan, A.; Chowdhury, S.: VOCON: Vocabulary control online in MicroIsis databases (1995) 0.09
    0.0941134 = product of:
      0.1411701 = sum of:
        0.108015545 = weight(_text_:objects in 1087) [ClassicSimilarity], result of:
          0.108015545 = score(doc=1087,freq=2.0), product of:
            0.262769 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.049438477 = queryNorm
            0.41106653 = fieldWeight in 1087, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1087)
        0.03315455 = product of:
          0.0663091 = sum of:
            0.0663091 = weight(_text_:22 in 1087) [ClassicSimilarity], result of:
              0.0663091 = score(doc=1087,freq=4.0), product of:
                0.17312512 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049438477 = queryNorm
                0.38301262 = fieldWeight in 1087, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1087)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Discusses the need for facilities for online vocabulary control and standardization of terms, codes, etc., so as to secure consistency in naming of subjects, objects, countries, languages, etc., in databases at the data entry stage. Most information storage and retrieval packages for microcomputers, including MicroIsis, provide for online vocabulary control in formulating search expressions for information retrieval, but not at the data entry stage. VOCON.PAS is a Pascal interface program for use with MicroIsis software for (a) online selection of term(s) and/or code(s) from a vocabulary control tool, such as a thesaurus, subject heading list, classification scheme, or nomenclature list(s)
    Source
    Knowledge organization. 22(1995) no.1, S.18-22
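    The data-entry-stage control that VOCON adds can be pictured with a minimal sketch (Python rather than the original Pascal; the authority list is a toy assumption):

```python
AUTHORITY = {"Libraries", "Databases", "Classification"}  # toy authority list

def validate_entry(terms):
    """Vocabulary control at the data entry stage: accept terms found in
    the authority list, flag the rest for correction before saving."""
    accepted = [t for t in terms if t in AUTHORITY]
    rejected = [t for t in terms if t not in AUTHORITY]
    return accepted, rejected

accepted, rejected = validate_entry(["Libraries", "Libaries"])
print("accepted:", accepted, "| flag for correction:", rejected)
```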

Types

  • a 3513
  • m 382
  • el 219
  • s 164
  • b 40
  • x 40
  • i 23
  • r 22
  • ? 8
  • n 4
  • p 4
  • d 3
  • u 2
  • z 2
  • au 1
  • h 1