Search (1498 results, page 1 of 75)

  • Active filter: year_i:[2000 TO 2010}
  1. Schumann, A.: Bereit für XHTML (2000) 0.20
    0.19531444 = product of:
      0.29297164 = sum of:
        0.14864951 = weight(_text_:objects in 2297) [ClassicSimilarity], result of:
          0.14864951 = score(doc=2297,freq=2.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.58723795 = fieldWeight in 2297, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.078125 = fieldNorm(doc=2297)
        0.14432213 = product of:
          0.28864425 = sum of:
            0.28864425 = weight(_text_:fusion in 2297) [ClassicSimilarity], result of:
              0.28864425 = score(doc=2297,freq=2.0), product of:
                0.35273543 = queryWeight, product of:
                  7.406428 = idf(docFreq=72, maxDocs=44218)
                  0.047625583 = queryNorm
                0.8183024 = fieldWeight in 2297, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.406428 = idf(docFreq=72, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2297)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Object
    Net Objects Fusion
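Note: the indented breakdowns shown with each hit are Lucene ClassicSimilarity explain output. Each matching term contributes queryWeight (idf × queryNorm) times fieldWeight (tf × idf × fieldNorm), and coord factors down-weight partial matches. The following is a minimal sketch that reproduces the first hit's score from the values printed above; reading coord(2/3) as "two of three top-level query clauses matched" is an assumption taken from the explain output itself, since the query is not shown on this page.

```python
import math

def clause_score(freq, idf, query_norm, field_norm):
    """One term clause in ClassicSimilarity: queryWeight * fieldWeight."""
    query_weight = idf * query_norm                     # idf(t) * queryNorm
    field_weight = math.sqrt(freq) * idf * field_norm   # tf(freq) * idf * fieldNorm(doc)
    return query_weight * field_weight

query_norm, field_norm = 0.047625583, 0.078125

objects_w = clause_score(2.0, 5.315071, query_norm, field_norm)  # ~0.14864951
fusion_w = clause_score(2.0, 7.406428, query_norm, field_norm)   # ~0.28864425

# "fusion" matches 1 of 2 sub-clauses -> coord(1/2); overall 2 of 3 clauses match -> coord(2/3)
score = (objects_w + fusion_w * 0.5) * (2.0 / 3.0)
print(round(score, 8))  # ~0.1953, matching the 0.19531444 reported for doc 2297
```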
  2. Proffitt, M.: Pulling it all together : use of METS in RLG cultural materials service (2004) 0.13
    0.12932545 = product of:
      0.19398816 = sum of:
        0.16817772 = weight(_text_:objects in 767) [ClassicSimilarity], result of:
          0.16817772 = score(doc=767,freq=4.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.6643839 = fieldWeight in 767, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0625 = fieldNorm(doc=767)
        0.025810435 = product of:
          0.05162087 = sum of:
            0.05162087 = weight(_text_:22 in 767) [ClassicSimilarity], result of:
              0.05162087 = score(doc=767,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.30952093 = fieldWeight in 767, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=767)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     RLG has used METS for a particular application, that is, as a wrapper for structural metadata. When RLG Cultural Materials was launched, there was no single way to deal with "complex digital objects". METS provides a standard means of encoding metadata regarding the digital objects represented in RCM, and it has now been fully integrated into the workflow for this service.
    Source
    Library hi tech. 22(2004) no.1, S.65-68
  3. Srinivasan, R.; Boast, R.; Becvar, K.M.; Furner, J.: Blobgects : digital museum catalogs and diverse user communities (2009) 0.12
    0.121551156 = product of:
      0.18232673 = sum of:
        0.16619521 = weight(_text_:objects in 2754) [ClassicSimilarity], result of:
          0.16619521 = score(doc=2754,freq=10.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.656552 = fieldWeight in 2754, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2754)
        0.016131522 = product of:
          0.032263044 = sum of:
            0.032263044 = weight(_text_:22 in 2754) [ClassicSimilarity], result of:
              0.032263044 = score(doc=2754,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.19345059 = fieldWeight in 2754, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2754)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This article presents an exploratory study of Blobgects, an experimental interface for an online museum catalog that enables social tagging and blogging activity around a set of cultural heritage objects held by a preeminent museum of anthropology and archaeology. This study attempts to understand not just whether social tagging and commenting about these objects is useful but rather whose tags and voices matter in presenting different expert perspectives around digital museum objects. Based on an empirical comparison between two different user groups (Canadian Inuit high-school students and museum studies students in the United States), we found that merely adding the ability to tag and comment to the museum's catalog does not sufficiently allow users to learn about or engage with the objects represented by catalog entries. Rather, the specialist language of the catalog provides too little contextualization for users to enter into the sort of dialog that proponents of Web 2.0 technologies promise. Overall, we propose a more nuanced application of Web 2.0 technologies within museums - one which provides a contextual basis that gives users a starting point for engagement and permits users to make sense of objects in relation to their own needs, uses, and understandings.
    Date
    22. 3.2009 18:52:32
  4. Larsen, B.; Ingwersen, P.; Lund, B.: Data fusion according to the principle of polyrepresentation (2009) 0.12
    0.117458045 = product of:
      0.35237414 = sum of:
        0.35237414 = sum of:
          0.3265637 = weight(_text_:fusion in 2752) [ClassicSimilarity], result of:
            0.3265637 = score(doc=2752,freq=16.0), product of:
              0.35273543 = queryWeight, product of:
                7.406428 = idf(docFreq=72, maxDocs=44218)
                0.047625583 = queryNorm
              0.9258035 = fieldWeight in 2752, product of:
                4.0 = tf(freq=16.0), with freq of:
                  16.0 = termFreq=16.0
                7.406428 = idf(docFreq=72, maxDocs=44218)
                0.03125 = fieldNorm(doc=2752)
          0.025810435 = weight(_text_:22 in 2752) [ClassicSimilarity], result of:
            0.025810435 = score(doc=2752,freq=2.0), product of:
              0.16677667 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047625583 = queryNorm
              0.15476047 = fieldWeight in 2752, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2752)
      0.33333334 = coord(1/3)
    
    Abstract
     We report data fusion experiments carried out on the four best-performing retrieval models from TREC 5. Three were conceptually/algorithmically very different from one another; one was algorithmically similar to one of the former. The objective of the test was to observe the performance of the 11 logical data fusion combinations compared to the performance of the four individual models and their intermediate fusions when following the principle of polyrepresentation. This principle is based on the cognitive IR perspective (Ingwersen & Järvelin, 2005) and implies that each retrieval model is regarded as a representation of a unique interpretation of information retrieval (IR). It predicts that only fusions of very different, but equally good, IR models may outperform each constituent as well as their intermediate fusions. Two kinds of experiments were carried out. One tested restricted fusions, which entails that only the inner disjoint overlap documents between fused models are ranked. The second set of experiments was based on traditional data fusion methods. The experiments involved the 30 TREC 5 topics that contain more than 44 relevant documents. In all tests, the Borda and CombSUM scoring methods were used. Performance was measured by precision and recall, with document cutoff values (DCVs) at 100 and 15 documents, respectively. Results show that restricted fusions made of two, three, or four cognitively/algorithmically very different retrieval models perform significantly better than do the individual models at DCV100. At DCV15, however, the results of polyrepresentative fusion were less predictable. The traditional fusion method based on polyrepresentation principles demonstrates a clear picture of performance at both DCV levels and verifies the polyrepresentation predictions for data fusion in IR. Data fusion improves retrieval performance over the constituent IR models only if the models are all conceptually/algorithmically quite dissimilar and perform equally well, in that order of importance.
    Date
    22. 3.2009 18:48:28
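The Borda and CombSUM methods named in this abstract are standard rank-based and score-based fusion rules. Below is a generic sketch of both (not the authors' implementation; the min-max score normalization and the candidate-pool size are assumptions), taking each run as a {doc_id: score} mapping.

```python
from collections import defaultdict

def comb_sum(runs):
    """CombSUM: sum each document's min-max normalized scores across all runs."""
    fused = defaultdict(float)
    for run in runs:
        lo, hi = min(run.values()), max(run.values())
        for doc, s in run.items():
            fused[doc] += (s - lo) / (hi - lo) if hi > lo else 0.0
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

def borda(runs, pool_size=1000):
    """Borda count: a document earns points from its rank position in every run."""
    fused = defaultdict(float)
    for run in runs:
        for rank, doc in enumerate(sorted(run, key=run.get, reverse=True)):
            fused[doc] += pool_size - rank
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

runs = [{"d1": 2.1, "d2": 1.4, "d3": 0.3}, {"d2": 0.9, "d3": 0.7, "d4": 0.1}]
print(comb_sum(runs))  # d2 ranks first: it scores well in both runs
```

CombSUM rewards documents that several models retrieve with high scores, while Borda ignores raw scores and uses only rank positions, which is why the two can behave differently when the models' score scales are not comparable.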
  5. Yee, R.; Beaubien, R.: A preliminary crosswalk from METS to IMS content packaging (2004) 0.10
    0.09699409 = product of:
      0.14549112 = sum of:
        0.1261333 = weight(_text_:objects in 4752) [ClassicSimilarity], result of:
          0.1261333 = score(doc=4752,freq=4.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.49828792 = fieldWeight in 4752, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=4752)
        0.019357827 = product of:
          0.038715653 = sum of:
            0.038715653 = weight(_text_:22 in 4752) [ClassicSimilarity], result of:
              0.038715653 = score(doc=4752,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.23214069 = fieldWeight in 4752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4752)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    As educational technology becomes pervasive, demand will grow for library content to be incorporated into courseware. Among the barriers impeding interoperability between libraries and educational tools is the difference in specifications commonly used for the exchange of digital objects and metadata. Among libraries, Metadata Encoding and Transmission Standard (METS) is a new but increasingly popular standard; the IMS content-package (IMS-CP) plays a parallel role in educational technology. This article describes how METS-encoded library content can be converted into digital objects for IMS-compliant systems through an XSLT-based crosswalk. The conceptual models behind METS and IMS-CP are compared, the design and limitations of an XSLT-based translation are described, and the crosswalks are related to other techniques to enhance interoperability.
    Source
    Library hi tech. 22(2004) no.1, S.69-81
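The article's XSLT-based crosswalk can be exercised with any XSLT 1.0 processor; the following is a minimal sketch using lxml. The stylesheet and input file names are placeholders for illustration, not files provided by the paper.

```python
from lxml import etree

# Hypothetical file names; the actual METS-to-IMS-CP stylesheet is the one described in the article.
mets_doc = etree.parse("mets_record.xml")
crosswalk = etree.XSLT(etree.parse("mets_to_imscp.xsl"))

ims_package = crosswalk(mets_doc)  # apply the crosswalk transform
print(etree.tostring(ims_package, pretty_print=True).decode())
```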
  6. Understanding metadata (2004) 0.10
    0.09648669 = product of:
      0.14473003 = sum of:
        0.1189196 = weight(_text_:objects in 2686) [ClassicSimilarity], result of:
          0.1189196 = score(doc=2686,freq=2.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.46979034 = fieldWeight in 2686, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0625 = fieldNorm(doc=2686)
        0.025810435 = product of:
          0.05162087 = sum of:
            0.05162087 = weight(_text_:22 in 2686) [ClassicSimilarity], result of:
              0.05162087 = score(doc=2686,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.30952093 = fieldWeight in 2686, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2686)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     Metadata (structured information about an object or collection of objects) is increasingly important to libraries, archives, and museums. Although librarians are familiar with a number of issues that apply to creating and using metadata (e.g., authority control and controlled vocabularies), the world of metadata is nonetheless different from library cataloging, with its own set of challenges. Therefore, whether you are new to these concepts or quite experienced with classic cataloging, this short (20-page) introductory paper on metadata can be helpful.
    Date
    10. 9.2004 10:22:40
  7. Bates, M.J.: Fundamental forms of information (2006) 0.09
    0.09066229 = product of:
      0.13599344 = sum of:
        0.10405465 = weight(_text_:objects in 2746) [ClassicSimilarity], result of:
          0.10405465 = score(doc=2746,freq=2.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.41106653 = fieldWeight in 2746, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2746)
        0.031938784 = product of:
          0.06387757 = sum of:
            0.06387757 = weight(_text_:22 in 2746) [ClassicSimilarity], result of:
              0.06387757 = score(doc=2746,freq=4.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.38301262 = fieldWeight in 2746, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2746)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Fundamental forms of information, as well as the term information itself, are defined and developed for the purposes of information science/studies. Concepts of natural and represented information (taking an unconventional sense of representation), encoded and embodied information, as well as experienced, enacted, expressed, embedded, recorded, and trace information are elaborated. The utility of these terms for the discipline is illustrated with examples from the study of information-seeking behavior and of information genres. Distinctions between the information and curatorial sciences with respect to their social (and informational) objects of study are briefly outlined.
    Date
    22. 3.2009 18:15:22
  8. Lubas, R.L.; Wolfe, R.H.W.; Fleischman, M.: Creating metadata practices for MIT's OpenCourseWare Project (2004) 0.08
    0.08442586 = product of:
      0.12663879 = sum of:
        0.10405465 = weight(_text_:objects in 2843) [ClassicSimilarity], result of:
          0.10405465 = score(doc=2843,freq=2.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.41106653 = fieldWeight in 2843, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2843)
        0.022584131 = product of:
          0.045168262 = sum of:
            0.045168262 = weight(_text_:22 in 2843) [ClassicSimilarity], result of:
              0.045168262 = score(doc=2843,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.2708308 = fieldWeight in 2843, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2843)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     The MIT Libraries were called upon to recommend a metadata scheme for the resources contained in MIT's OpenCourseWare (OCW) project. The resources in OCW needed descriptive, structural, and technical metadata. The SCORM standard, which uses IEEE Learning Object Metadata for its descriptive standard, was selected for its focus on educational objects. However, it was clear that the Libraries would need to recommend how the standard would be applied and adapted to accommodate needs that were not addressed in the standard's specifications. The newly formed MIT Libraries Metadata Unit adapted established practices from AACR2 and MARC traditions when facing situations in which there were no precedents to follow.
    Source
    Library hi tech. 22(2004) no.2, S.138-143
  9. Madison, O.M.A.: The IFLA Functional Requirements for Bibliographic Records : international standards for bibliographic control (2000) 0.08
    0.0808284 = product of:
      0.1212426 = sum of:
        0.10511108 = weight(_text_:objects in 187) [ClassicSimilarity], result of:
          0.10511108 = score(doc=187,freq=4.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.41523993 = fieldWeight in 187, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=187)
        0.016131522 = product of:
          0.032263044 = sum of:
            0.032263044 = weight(_text_:22 in 187) [ClassicSimilarity], result of:
              0.032263044 = score(doc=187,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.19345059 = fieldWeight in 187, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=187)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The formal charge for the IFLA study involving international bibliography standards was to delineate the functions that are performed by the bibliographic record with respect to various media, applications, and user needs. The method used was the entity relationship analysis technique. Three groups of entities that are the key objects of interest to users of bibliographic records were defined. The primary group contains four entities: work, expression, manifestation, and item. The second group includes entities responsible for the intellectual or artistic content, production, or ownership of entities in the first group. The third group includes entities that represent concepts, objects, events, and places. In the study we identified the attributes associated with each entity and the relationships that are most important to users. The attributes and relationships were mapped to the functional requirements for bibliographic records that were defined in terms of four user tasks: to find, identify, select, and obtain. Basic requirements for national bibliographic records were recommended based on the entity analysis. The recommendations of the study are compared with two standards, AACR (Anglo-American Cataloguing Rules) and the Dublin Core, to place them into pragmatic context. The results of the study are being used in the review of the complete set of ISBDs as the initial benchmark in determining data elements for each format.
    Date
    10. 9.2000 17:38:22
  10. Raper, J.: Geographic relevance (2007) 0.08
    0.0808284 = product of:
      0.1212426 = sum of:
        0.10511108 = weight(_text_:objects in 846) [ClassicSimilarity], result of:
          0.10511108 = score(doc=846,freq=4.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.41523993 = fieldWeight in 846, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=846)
        0.016131522 = product of:
          0.032263044 = sum of:
            0.032263044 = weight(_text_:22 in 846) [ClassicSimilarity], result of:
              0.032263044 = score(doc=846,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.19345059 = fieldWeight in 846, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=846)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     Purpose - This paper concerns the dimensions of relevance in information retrieval systems and their completeness in new retrieval contexts such as mobile search. Geography as a factor in relevance is little understood, and information seeking is assumed to take place in indoor environments. Yet the rise of information seeking on the move using mobile devices implies the need to better understand the kind of situational relevance operating in this kind of context. Design/methodology/approach - The paper outlines and explores a geographic information seeking process in which geographic information needs (conditioned by needs and tasks, in context) drive the acquisition and use of geographic information objects, which in turn influence geographic behaviour in the environment. Geographic relevance is defined as "a relation between a geographic information need" (like an attention span) and "the spatio-temporal expression of the geographic information objects needed to satisfy it" (like an area of influence). Some empirical examples are given to indicate the theoretical and practical application of this work. Findings - The paper sets out definitions of geographical information needs based on cognitive and geographic criteria, and proposes four canonical cases, which might be theorised as anomalous states of geographic knowledge (ASGK). The paper argues that geographic relevance is best defined as a spatio-temporally extended relation between information need (an "attention" span) and geographic information object (a zone of "influence"), and it defines four domains of geographic relevance. Finally, a model of geographic relevance is suggested in which attention and influence are modelled as map layers whose intersection can define the nature of the relation. Originality/value - Geographic relevance is a new field of research that has so far been poorly defined and little researched. This paper sets out new principles for the study of geographic information behaviour.
    Date
    23.12.2007 14:22:24
  11. Ku, L.-W.; Ho, H.-W.; Chen, H.-H.: Opinion mining and relationship discovery using CopeOpi opinion analysis system (2009) 0.08
    0.0808284 = product of:
      0.1212426 = sum of:
        0.10511108 = weight(_text_:objects in 2938) [ClassicSimilarity], result of:
          0.10511108 = score(doc=2938,freq=4.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.41523993 = fieldWeight in 2938, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2938)
        0.016131522 = product of:
          0.032263044 = sum of:
            0.032263044 = weight(_text_:22 in 2938) [ClassicSimilarity], result of:
              0.032263044 = score(doc=2938,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.19345059 = fieldWeight in 2938, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2938)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    We present CopeOpi, an opinion-analysis system, which extracts from the Web opinions about specific targets, summarizes the polarity and strength of these opinions, and tracks opinion variations over time. Objects that yield similar opinion tendencies over a certain time period may be correlated due to the latent causal events. CopeOpi discovers relationships among objects based on their opinion-tracking plots and collocations. Event bursts are detected from the tracking plots, and the strength of opinion relationships is determined by the coverage of these plots. To evaluate opinion mining, we use the NTCIR corpus annotated with opinion information at sentence and document levels. CopeOpi achieves sentence- and document-level f-measures of 62% and 74%. For relationship discovery, we collected 1.3M economics-related documents from 93 Web sources over 22 months, and analyzed collocation-based, opinion-based, and hybrid models. We consider as correlated company pairs that demonstrate similar stock-price variations, and selected these as the gold standard for evaluation. Results show that opinion-based and collocation-based models complement each other, and that integrated models perform the best. The top 25, 50, and 100 pairs discovered achieve precision rates of 1, 0.92, and 0.79, respectively.
  12. Galloway, P.: Preservation of digital objects (2003) 0.07
    0.07432476 = product of:
      0.22297426 = sum of:
        0.22297426 = weight(_text_:objects in 4275) [ClassicSimilarity], result of:
          0.22297426 = score(doc=4275,freq=18.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.8808569 = fieldWeight in 4275, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4275)
      0.33333334 = coord(1/3)
    
    Abstract
     The preservation of digital objects (defined here as objects in digital form that require a computer to support their existence and display) is obviously an important practical issue for the information professions, with its importance growing daily as more information objects are produced in, or converted to, digital form. Yakel's (2001) review of the field provided a much-needed introduction. At the same time, the complexity of new digital objects continues to increase, challenging existing preservation efforts (Lee, Slattery, Lu, Tang, & McCrary, 2002). The field of information science itself is beginning to pay some reflexive attention to the creation of fragile and unpreservable digital objects. But these concerns often focus on the practical problems of short-term repurposing of digital objects rather than actual preservation, by which I mean the activity of carrying digital objects from one software generation to another, undertaken for purposes beyond the original reasons for creating the objects. For preservation in this sense to be possible, information science as a discipline needs to be active in the formulation of, and advocacy for, national information policies. Such policies will need to challenge the predominant cultural expectation of planned obsolescence for information resources, and cultural artifacts in general.
  13. Shepherd, M.; Watters, C.: Boundary objects and the digital library (2006) 0.07
    0.07282309 = product of:
      0.21846926 = sum of:
        0.21846926 = weight(_text_:objects in 1490) [ClassicSimilarity], result of:
          0.21846926 = score(doc=1490,freq=12.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.86305994 = fieldWeight in 1490, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=1490)
      0.33333334 = coord(1/3)
    
    Abstract
     Boundary objects are entities shared by different communities but used differently by each group. The paper explores the multifaceted aspects of boundary objects in digital libraries. The issue of semantic interoperability is examined from the perspective of 'communities of practice' and 'communities of interest'. While the concept of boundary objects holds some promise of resolving this problem, an efficient solution depends on how knowledge is represented so that it can be shared among various participants in a meaningful manner. Classification schemes can be used as a standard to implement boundary objects to bridge access to shared information resources for different users. The value and utility of adopting "Absolute Syntax" for the representation of subjects as a framework for boundary objects need to be explored.
  14. Bueno-de-la-Fuente, G.; Hernández-Pérez, T.; Rodríguez-Mateos, D.; Méndez-Rodríguez, E.M.; Martín-Galán, B.: Study on the use of metadata for digital learning objects in University Institutional Repositories (MODERI) (2009) 0.07
    0.07282309 = product of:
      0.21846926 = sum of:
        0.21846926 = weight(_text_:objects in 2981) [ClassicSimilarity], result of:
          0.21846926 = score(doc=2981,freq=12.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.86305994 = fieldWeight in 2981, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=2981)
      0.33333334 = coord(1/3)
    
    Abstract
     Metadata is a core issue for the creation of repositories. Different institutional repositories have chosen and use different metadata models, elements and values for describing the range of digital objects they store. Thus, this paper analyzes the current use of metadata describing those Learning Objects that some open higher educational institutions' repositories include in their collections. The goal of this work is to identify and analyze the different metadata models being used to describe educational features of those specific digital educational objects (such as audience, type of educational material, learning objectives, etc.). Also discussed are the concept and typology of Learning Objects (LO) through their use in University Repositories. We will also examine the usefulness of specifically describing those learning objects, setting them apart from other kinds of documents included in the repository, mainly scholarly publications and research results of the Higher Education institution.
  15. Li, X.: Designing an interactive Web tutorial with cross-browser dynamic HTML (2000) 0.07
    0.072365016 = product of:
      0.10854752 = sum of:
        0.0891897 = weight(_text_:objects in 4897) [ClassicSimilarity], result of:
          0.0891897 = score(doc=4897,freq=2.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.35234275 = fieldWeight in 4897, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=4897)
        0.019357827 = product of:
          0.038715653 = sum of:
            0.038715653 = weight(_text_:22 in 4897) [ClassicSimilarity], result of:
              0.038715653 = score(doc=4897,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.23214069 = fieldWeight in 4897, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4897)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     Texas A&M University Libraries developed a Web-based training (WBT) application for LandView III, a federal depository CD-ROM publication, using cross-browser dynamic HTML (DHTML) and other Web technologies. The interactive and self-paced tutorial demonstrates the major features of the CD-ROM and shows how to navigate the programs. The tutorial features dynamic HTML techniques, such as hiding, showing and moving layers; dragging objects; and windows-style drop-down menus. It also integrates interactive forms, common gateway interface (CGI), frames, and animated GIF images in the design of the WBT. After describing the design and implementation of the tutorial project, the article presents an evaluation of usage statistics and user feedback, an assessment of the tutorial's strengths and weaknesses, and a comparison with other common types of training methods. It thus describes an innovative approach to CD-ROM training using advanced Web technologies such as dynamic HTML, which can simulate and demonstrate the interactive use of the CD-ROM, as well as the actual search process of a database.
    Date
    28. 1.2006 19:21:22
  16. Winget, M.A.: Annotations on musical scores by performing musicians : collaborative models, interactive methods, and music digital library tool development (2008) 0.07
    0.072365016 = product of:
      0.10854752 = sum of:
        0.0891897 = weight(_text_:objects in 2368) [ClassicSimilarity], result of:
          0.0891897 = score(doc=2368,freq=2.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.35234275 = fieldWeight in 2368, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=2368)
        0.019357827 = product of:
          0.038715653 = sum of:
            0.038715653 = weight(_text_:22 in 2368) [ClassicSimilarity], result of:
              0.038715653 = score(doc=2368,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.23214069 = fieldWeight in 2368, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2368)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Although there have been a number of fairly recent studies in which researchers have explored the information-seeking and management behaviors of people interacting with musical retrieval systems, there have been very few published studies of the interaction and use behaviors of musicians interacting with their primary information object, the musical score. The ethnographic research reported here seeks to correct this deficiency in the literature. In addition to observing rehearsals and conducting 22 in-depth musician interviews, this research provides in-depth analysis of 25,000 annotations representing 250 parts from 13 complete musical works, made by musicians of all skill levels and performance modes. In addition to producing specific and practical recommendations for digital-library development, this research also provides an augmented annotation framework that will enable more specific study of human-information interaction, both with musical scores, and with more general notational/instructional information objects.
  17. Gendt, M. van; Isaac, I.; Meij, L. van der; Schlobach, S.: Semantic Web techniques for multiple views on heterogeneous collections : a case study (2006) 0.07
    0.072365016 = product of:
      0.10854752 = sum of:
        0.0891897 = weight(_text_:objects in 2418) [ClassicSimilarity], result of:
          0.0891897 = score(doc=2418,freq=2.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.35234275 = fieldWeight in 2418, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=2418)
        0.019357827 = product of:
          0.038715653 = sum of:
            0.038715653 = weight(_text_:22 in 2418) [ClassicSimilarity], result of:
              0.038715653 = score(doc=2418,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.23214069 = fieldWeight in 2418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2418)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Integrated digital access to multiple collections is a prominent issue for many Cultural Heritage institutions. The metadata describing diverse collections must be interoperable, which requires aligning the controlled vocabularies that are used to annotate objects from these collections. In this paper, we present an experiment where we match the vocabularies of two collections by applying the Knowledge Representation techniques established in recent Semantic Web research. We discuss the steps that are required for such matching, namely formalising the initial resources using Semantic Web languages, and running ontology mapping tools on the resulting representations. In addition, we present a prototype that enables the user to browse the two collections using the obtained alignment while still providing her with the original vocabulary structures.
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
  18. Klas, C.-P.; Fuhr, N.; Schaefer, A.: Evaluating strategic support for information access in the DAFFODIL system (2004) 0.07
    0.072365016 = product of:
      0.10854752 = sum of:
        0.0891897 = weight(_text_:objects in 2419) [ClassicSimilarity], result of:
          0.0891897 = score(doc=2419,freq=2.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.35234275 = fieldWeight in 2419, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=2419)
        0.019357827 = product of:
          0.038715653 = sum of:
            0.038715653 = weight(_text_:22 in 2419) [ClassicSimilarity], result of:
              0.038715653 = score(doc=2419,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.23214069 = fieldWeight in 2419, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2419)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The digital library system Daffodil is targeted at strategic support of users during the information search process. For searching, exploring and managing digital library objects it provides user-customisable information seeking patterns over a federation of heterogeneous digital libraries. In this paper evaluation results with respect to retrieval effectiveness, efficiency and user satisfaction are presented. The analysis focuses on strategic support for the scientific work-flow. Daffodil supports the whole work-flow, from data source selection over information seeking to the representation, organisation and reuse of information. By embedding high level search functionality into the scientific work-flow, the user experiences better strategic system support due to a more systematic work process. These ideas have been implemented in Daffodil followed by a qualitative evaluation. The evaluation has been conducted with 28 participants, ranging from information seeking novices to experts. The results are promising, as they support the chosen model.
    Date
    16.11.2008 16:22:48
  19. Renear, A.H.; Wickett, K.M.; Urban, R.J.; Dubin, D.; Shreeves, S.L.: Collection/item metadata relationships (2008) 0.07
    0.072365016 = product of:
      0.10854752 = sum of:
        0.0891897 = weight(_text_:objects in 2623) [ClassicSimilarity], result of:
          0.0891897 = score(doc=2623,freq=2.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.35234275 = fieldWeight in 2623, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=2623)
        0.019357827 = product of:
          0.038715653 = sum of:
            0.038715653 = weight(_text_:22 in 2623) [ClassicSimilarity], result of:
              0.038715653 = score(doc=2623,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.23214069 = fieldWeight in 2623, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2623)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Contemporary retrieval systems, which search across collections, usually ignore collection-level metadata. Alternative approaches, exploiting collection-level information, will require an understanding of the various kinds of relationships that can obtain between collection-level and item-level metadata. This paper outlines the problem and describes a project that is developing a logic-based framework for classifying collection/item metadata relationships. This framework will support (i) metadata specification developers defining metadata elements, (ii) metadata creators describing objects, and (iii) system designers implementing systems that take advantage of collection-level metadata. We present three examples of collection/item metadata relationship categories, attribute/value-propagation, value-propagation, and value-constraint and show that even in these simple cases a precise formulation requires modal notions in addition to first-order logic. These formulations are related to recent work in information retrieval and ontology evaluation.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  20. Kruk, S.R.; Kruk, E.; Stankiewicz, K.: Evaluation of semantic and social technologies for digital libraries (2009) 0.07
    0.072365016 = product of:
      0.10854752 = sum of:
        0.0891897 = weight(_text_:objects in 3387) [ClassicSimilarity], result of:
          0.0891897 = score(doc=3387,freq=2.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.35234275 = fieldWeight in 3387, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=3387)
        0.019357827 = product of:
          0.038715653 = sum of:
            0.038715653 = weight(_text_:22 in 3387) [ClassicSimilarity], result of:
              0.038715653 = score(doc=3387,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.23214069 = fieldWeight in 3387, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3387)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     Libraries are the tools we use to learn and to answer our questions. The quality of our work depends, among other things, on the quality of the tools we use. Recent research in digital libraries focuses, on the one hand, on improving the infrastructure of digital library management systems (DLMS) and, on the other, on improving the metadata models used to annotate the collections of objects maintained by a DLMS. The latter includes, among others, semantic web and social networking technologies, which are now being introduced to the digital library domain. The expected outcome is that the overall quality of information discovery in digital libraries can be improved by employing social and semantic technologies. In this chapter we present the results of an evaluation of social and semantic end-user information discovery services for digital libraries.
    Date
    1. 8.2010 12:35:22

Types

  • a 1261
  • m 160
  • el 90
  • s 60
  • b 26
  • x 15
  • i 8
  • n 3
  • r 2