Search (441 results, page 1 of 23)

  • Filter: type_ss:"el"
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.32
    0.3239617 = product of:
      0.75591063 = sum of:
        0.10798724 = product of:
          0.3239617 = sum of:
            0.3239617 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.3239617 = score(doc=1826,freq=2.0), product of:
                0.34585547 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04079441 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
        0.3239617 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.3239617 = score(doc=1826,freq=2.0), product of:
            0.34585547 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04079441 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
        0.3239617 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.3239617 = score(doc=1826,freq=2.0), product of:
            0.34585547 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04079441 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.42857143 = coord(3/7)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
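    The indented score breakdown under result 1 is Lucene's "explain" output for its ClassicSimilarity (TF-IDF) ranking model; the same model produces the relevance score shown beside every result below. As a minimal sketch, assuming only the standard ClassicSimilarity structure (score = coord x sum of clause weights, with queryWeight = idf x queryNorm and fieldWeight = sqrt(tf) x idf x fieldNorm), the following Python snippet recomputes result 1's 0.3239617 score from the constants in the dump, up to floating-point rounding:

      # Recomputing result 1's score from the explain output above.
      # All constants are copied from the dump; only the structure of the
      # ClassicSimilarity formula is assumed.
      import math

      query_norm = 0.04079441
      idf = 8.478011           # idf(docFreq=24, maxDocs=44218)
      field_norm = 0.078125    # fieldNorm(doc=1826)
      freq = 2.0               # termFreq

      tf = math.sqrt(freq)                      # 1.4142135
      query_weight = idf * query_norm           # 0.34585547
      field_weight = tf * idf * field_norm      # 0.93669677
      term_score = query_weight * field_weight  # 0.3239617 per matching clause

      # The first clause is itself a 1-of-3 coordinated subquery, so it
      # contributes only a third of a full term score.
      clause_sum = term_score / 3.0 + term_score + term_score  # 0.75591063

      coord = 3.0 / 7.0  # 3 of 7 top-level query clauses matched
      print(clause_sum * coord)                 # ~0.3239617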
  2. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.26
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  3. Shala, E.: Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.16
    
    Footnote
    See: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls
  4. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.08
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
    Source
    Beyond libraries - subject metadata in the digital environment and semantic web. IFLA Satellite Post-Conference, 17-18 August 2012, Tallinn
  5. Faro, S.; Francesconi, E.; Sandrucci, V.: Thesauri KOS analysis and selected thesaurus mapping methodology on the project case-study (2007) 0.08
    
    Abstract
    - Introduction to the Thesaurus Interoperability problem
    - Analysis of the thesauri for the project case study
    - Overview of Schema/Ontology Mapping methodologies
    - The proposed approach for thesaurus mapping
    - Standards for implementing the proposed methodology
    Date
    7.11.2008 10:40:22
    Series
    TENDER No 10118 - EUROVOC Studies LOT2
  6. Bailey, C.W. Jr.: Scholarly electronic publishing bibliography (2003) 0.07
    
    Content
    Table of Contents
    1 Economic Issues*
    2 Electronic Books and Texts
      2.1 Case Studies and History
      2.2 General Works*
      2.3 Library Issues*
    3 Electronic Serials
      3.1 Case Studies and History
      3.2 Critiques
      3.3 Electronic Distribution of Printed Journals
      3.4 General Works*
      3.5 Library Issues*
      3.6 Research*
    4 General Works*
    5 Legal Issues
      5.1 Intellectual Property Rights*
      5.2 License Agreements
      5.3 Other Legal Issues
    6 Library Issues
      6.1 Cataloging, Identifiers, Linking, and Metadata*
      6.2 Digital Libraries*
      6.3 General Works*
      6.4 Information Integrity and Preservation*
    7 New Publishing Models*
    8 Publisher Issues
      8.1 Digital Rights Management*
    9 Repositories and E-Prints*
    Appendix A. Related Bibliographies by the Same Author
    Appendix B. About the Author
  7. Haslhofer, B.: Uniform SPARQL access to interlinked (digital library) sources (2007) 0.06
    
    Abstract
    In this presentation, we therefore focus on a solution for providing uniform access to Digital Libraries and other online services. In order to enable uniform query access to heterogeneous sources, we must provide metadata interoperability in a way that a query language - in this case SPARQL - can cope with the incompatibility of the metadata in various sources without changing their existing information models.
    Date
    26.12.2011 13:22:46
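    As a hypothetical illustration of the uniform access described in this abstract, the sketch below issues a single SPARQL query for Dublin Core titles against one mediating endpoint. The endpoint URL is an invented placeholder, not something from the original presentation, and the SPARQLWrapper client merely stands in for whatever access layer such sources would actually expose:

      # One query, many mapped sources: a hypothetical mediator endpoint is
      # assumed to expose the interlinked sources under a shared vocabulary.
      from SPARQLWrapper import SPARQLWrapper, JSON

      endpoint = SPARQLWrapper("http://example.org/mediator/sparql")  # placeholder
      endpoint.setQuery("""
          PREFIX dc: <http://purl.org/dc/elements/1.1/>
          SELECT ?record ?title
          WHERE { ?record dc:title ?title }
          LIMIT 10
      """)
      endpoint.setReturnFormat(JSON)

      # Print each record URI and its title from the JSON result bindings.
      for b in endpoint.query().convert()["results"]["bindings"]:
          print(b["record"]["value"], "-", b["title"]["value"])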
  8. Fagan, J.C.: Usability studies of faceted browsing : a literature review (2010) 0.04
    
    Abstract
    Faceted browsing is a common feature of new library catalog interfaces. But to what extent does it improve user performance in searching within today's library catalog systems? This article reviews the literature for user studies involving faceted browsing and user studies of "next-generation" library catalogs that incorporate faceted browsing. Both the results and the methods of these studies are analyzed by asking, What do we currently know about faceted browsing? How can we design better studies of faceted browsing in library catalogs? The article proposes methodological considerations for practicing librarians and provides examples of goals, tasks, and measurements for user studies of faceted browsing in library catalogs.
    Source
    Information technology and libraries. 2010, June, S.58-66
  9. Beall, J.: Approaches to expansions : case studies from the German and Vietnamese translations (2003) 0.04
    
    Object
    DDC-22
  10. Qin, J.; Paling, S.: Converting a controlled vocabulary into an ontology : the case of GEM (2001) 0.04
    
    Date
    24. 8.2005 19:20:22
  11. Lavoie, B.; Connaway, L.S.; Dempsey, L.: Anatomy of aggregate collections : the example of Google print for libraries (2005) 0.03
    
    Abstract
    Google's December 2004 announcement of its intention to collaborate with five major research libraries - Harvard University, the University of Michigan, Stanford University, the University of Oxford, and the New York Public Library - to digitize and surface their print book collections in the Google searching universe has, predictably, stirred conflicting opinion, with some viewing the project as a welcome opportunity to enhance the visibility of library collections in new environments, and others wary of Google's prospective role as gateway to these collections. The project has been vigorously debated on discussion lists and blogs, with the participating libraries commonly referred to as "the Google 5". One point most observers seem to concede is that the questions raised by this initiative are both timely and significant. The Google Print Library Project (GPLP) has galvanized a long overdue, multi-faceted discussion about library print book collections. The print book is core to library identity and practice, but in an era of zero-sum budgeting, it is almost inevitable that print book budgets will decline as budgets for serials, digital resources, and other materials expand. As libraries re-allocate resources to accommodate changing patterns of user needs, print book budgets may be adversely impacted. Of course, the degree of impact will depend on a library's perceived mission. A public library may expect books to justify their shelf-space, with de-accession the consequence of minimal use. A national library, on the other hand, has a responsibility to the scholarly and cultural record and may seek to collect comprehensively within particular areas, with the attendant obligation to secure the long-term retention of its print book collections. The combination of limited budgets, changing user needs, and differences in library collection strategies underscores the need to think about a collective, or system-wide, print book collection - in particular, how can an inter-institutional system be organized to achieve goals that would be difficult, and/or prohibitively expensive, for any one library to undertake individually [4]? Mass digitization programs like GPLP cast new light on these and other issues surrounding the future of library print book collections, but at this early stage, it is light that illuminates only dimly. It will be some time before GPLP's implications for libraries and library print book collections can be fully appreciated and evaluated. But the strong interest and lively debate generated by this initiative suggest that some preliminary analysis - premature though it may be - would be useful, if only to undertake a rough mapping of the terrain over which GPLP potentially will extend. At the least, some early perspective helps shape interesting questions for the future, when the boundaries of GPLP become settled, workflows for producing and managing the digitized materials become systematized, and usage patterns within the GPLP framework begin to emerge.
    This article offers some perspectives on GPLP in light of what is known about library print book collections in general, and those of the Google 5 in particular, from information in OCLC's WorldCat bibliographic database and holdings file. Questions addressed include:
    - Coverage: What proportion of the system-wide print book collection will GPLP potentially cover? What is the degree of holdings overlap across the print book collections of the five participating libraries?
    - Language: What is the distribution of languages associated with the print books held by the GPLP libraries? Which languages are predominant?
    - Copyright: What proportion of the GPLP libraries' print book holdings are out of copyright?
    - Works: How many distinct works are represented in the holdings of the GPLP libraries? How does a focus on works impact coverage and holdings overlap?
    - Convergence: What are the effects on coverage of using a different set of five libraries? What are the effects of adding the holdings of additional libraries to those of the GPLP libraries, and how do these effects vary by library type?
    These questions certainly do not exhaust the analytical possibilities presented by GPLP. More in-depth analysis might look at Google 5 coverage in particular subject areas; it also would be interesting to see how many books covered by the GPLP have already been digitized in other contexts. However, these questions are left to future studies. The purpose here is to explore a few basic questions raised by GPLP, and in doing so, provide an empirical context for the debate that is sure to continue for some time to come. A secondary objective is to lay some groundwork for a general set of questions that could be used to explore the implications of any mass digitization initiative. A suggested list of questions is provided in the conclusion of the article.
    Date
    26.12.2011 14:08:22
  12. El-Ramly, N.; Peterson, R.E.; Volonino, L.: Top ten Web sites using search engines : the case of the desalination industry (1996) 0.03
    
    Abstract
    The desalination industry involves the desalting of sea or brackish water and achieves the purpose of increasing the world's effective water supply. There are approximately 4,000 desalination Web sites. The six major Internet search engines were used to determine, according to each of the six, the top twenty sites for desalination. Each site was visited and the 120 gross returns were pared down to the final ten - the 'Top Ten'. The Top Ten were then analyzed to determine what it was that made the sites useful and informative. The major attributes were: a) currency (up-to-date); b) search site capability; c) access to articles on desalination; d) newsletters; e) databases; f) product information; g) online conferencing; h) valuable links to other sites; i) communication links; j) site maps; and k) case studies. Reasons for having a Web site and the current status and prospects for Internet commerce are discussed
  13. Hollink, L.; Assem, M. van; Wang, S.; Isaac, A.; Schreiber, G.: Two variations on ontology alignment evaluation : methodological issues (2008) 0.03
    
    Abstract
    Evaluation of ontology alignments is in practice done in two ways: (1) assessing individual correspondences and (2) comparing the alignment to a reference alignment. However, this type of evaluation does not guarantee that an application which uses the alignment will perform well. In this paper, we contribute to the current ontology alignment evaluation practices by proposing two alternative evaluation methods that take into account some characteristics of a usage scenario without doing a full-fledged end-to-end evaluation. We compare different evaluation approaches in three case studies, focussing on methodological issues. Each case study considers an alignment between a different pair of ontologies, ranging from rich and well-structured to small and poorly structured. This enables us to conclude on the use of different evaluation approaches in different settings.
  14. Mixter, J.; Childress, E.R.: FAST (Faceted Application of Subject Terminology) users : summary and case studies (2013) 0.03
    
    Abstract
    This document presents: a brief overview of FAST; a brief analysis of common characteristics of parties that have either chosen to adopt FAST or chosen against using FAST; suggested improvements for FAST vocabulary and services; tables summarizing FAST adopters and non-adopters; and sixteen individual "case studies" presented as edited write-ups of interviews.
  15. Beagle, D.: Visualizing keyword distribution across multidisciplinary c-space (2003) 0.03
    
    Abstract
    The concept of c-space is proposed as a visualization schema relating containers of content to cataloging surrogates and classification structures. Possible applications of keyword vector clusters within c-space could include improved retrieval rates through the use of captioning within visual hierarchies, tracings of semantic bleeding among subclasses, and access to buried knowledge within subject-neutral publication containers. The Scholastica Project is described as one example, following a tradition of research dating back to the 1980s. Preliminary focus group assessment indicates that this type of classification rendering may offer digital library searchers enriched entry strategies and an expanded range of re-entry vocabularies. Those of us who work in traditional libraries typically assume that our systems of classification, Library of Congress Classification (LCC) and Dewey Decimal Classification (DDC), are descriptive rather than prescriptive. In other words, LCC classes and subclasses approximate natural groupings of texts that reflect an underlying order of knowledge, rather than arbitrary categories prescribed by librarians to facilitate efficient shelving. Philosophical support for this assumption has traditionally been found in a number of places, from the archetypal tree of knowledge, to Aristotelian categories, to the concept of discursive formations proposed by Michel Foucault. Gary P. Radford has elegantly described an encounter with Foucault's discursive formations in the traditional library setting: "Just by looking at the titles on the spines, you can see how the books cluster together...You can identify those books that seem to form the heart of the discursive formation and those books that reside on the margins. Moving along the shelves, you see those books that tend to bleed over into other classifications and that straddle multiple discursive formations. You can physically and sensually experience...those points that feel like state borders or national boundaries, those points where one subject ends and another begins, or those magical places where one subject has morphed into another..."
    But what happens to this awareness in a digital library? Can discursive formations be represented in cyberspace, perhaps through diagrams in a visualization interface? And would such a schema be helpful to a digital library user? To approach this question, it is worth taking a moment to reconsider what Radford is looking at. First, he looks at titles to see how the books cluster. To illustrate, I scanned one hundred books on the shelves of a college library under subclass HT 101-395, defined by the LCC subclass caption as Urban groups. The City. Urban sociology. Of the first 100 titles in this sequence, fifty included the word "urban" or variants (e.g. "urbanization"). Another thirty-five used the word "city" or variants. These keywords appear to mark their titles as the heart of this discursive formation. The scattering of titles not using "urban" or "city" used related terms such as "town," "community," or in one case "skyscrapers." So we immediately see some empirical correlation between keywords and classification. But we also see a problem with the commonly used search technique of title-keyword. A student interested in urban studies will want to know about this entire subclass, and may wish to browse every title available therein. A title-keyword search on "urban" will retrieve only half of the titles, while a search on "city" will retrieve just over a third. There will be no overlap, since no titles in this sample contain both words. The only place where both words appear in a common string is in the LCC subclass caption, but captions are not typically indexed in library Online Public Access Catalogs (OPACs). In a traditional library, this problem is mitigated when the student goes to the shelf looking for any one of the books and suddenly discovers a much wider selection than the keyword search had led him to expect. But in a digital library, the issue of non-retrieval can be more problematic, as studies have indicated. Micco and Popp reported that, in a study funded partly by the U.S. Department of Education, 65 of 73 unskilled users searching for material on U.S./Soviet foreign relations found some material but never realized they had missed a large percentage of what was in the database.
  16. Wielinga, B.; Wielemaker, J.; Schreiber, G.; Assem, M. van: Methods for porting resources to the Semantic Web (2004) 0.03
    
    Abstract
    Ontologies will play a central role in the development of the Semantic Web. It is unrealistic to assume that such ontologies will be developed from scratch. Rather, we assume that existing resources such as thesauri and lexical databases will be reused in the development of ontologies for the Semantic Web. In this paper we describe a method for converting existing source material to a representation that is compatible with Semantic Web languages such as RDF(S) and OWL. The method is illustrated with three case studies: converting Wordnet, AAT and MeSH to RDF(S) and OWL.
  17. Assem, M. van; Malaisé, V.; Miles, A.; Schreiber, G.: ¬A method to convert thesauri to SKOS (2006) 0.03
    
    Abstract
    Thesauri can be useful resources for indexing and retrieval on the Semantic Web, but often they are not published in RDF/OWL. To convert thesauri to RDF for use in Semantic Web applications and to ensure the quality and utility of the conversion, a structured method is required. Moreover, if different thesauri are to be interoperable without complicated mappings, a standard schema for thesauri is required. This paper presents a method for conversion of thesauri to the SKOS RDF/OWL schema, which is a proposal for such a standard under development by the W3C Semantic Web Best Practices Working Group. We apply the method to three thesauri: IPSV, GTAA and MeSH. With these case studies we evaluate our method and the applicability of SKOS for representing thesauri.
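    As a minimal sketch of the kind of mapping such a conversion produces, the rdflib snippet below turns one invented thesaurus entry ("Desalination BT Water treatment") into a skos:Concept with skos:prefLabel and skos:broader; the URIs and labels are illustrative and are not taken from IPSV, GTAA or MeSH:

      # Invented example: one thesaurus term with a broader term, as SKOS.
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/thesaurus/")  # placeholder namespace
      g = Graph()
      g.bind("skos", SKOS)

      # "Desalination BT Water treatment" becomes two concepts and a broader link.
      g.add((EX.desalination, RDF.type, SKOS.Concept))
      g.add((EX.desalination, SKOS.prefLabel, Literal("Desalination", lang="en")))
      g.add((EX.desalination, SKOS.broader, EX.waterTreatment))

      g.add((EX.waterTreatment, RDF.type, SKOS.Concept))
      g.add((EX.waterTreatment, SKOS.prefLabel, Literal("Water treatment", lang="en")))

      print(g.serialize(format="turtle"))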
  18. Wongthontham, P.; Abu-Salih, B.: Ontology-based approach for semantic data extraction from social big data : state-of-the-art and research directions (2018) 0.03
    
    Abstract
    Managing and extracting useful knowledge from social media data sources has attracted much attention from academia and industry. To address this challenge, this paper focuses on semantic analysis of textual data. We propose an ontology-based approach to extract the semantics of textual data and define the domain of the data. In other words, we semantically analyse the social data at two levels, i.e. the entity level and the domain level. We have chosen Twitter as the social channel for a proof of concept. Domain knowledge is captured in ontologies, which are then used to enrich the semantics of tweets with a specific conceptual representation of the entities that appear in them. Case studies are used to demonstrate this approach. We evaluate the proposed approach on a public dataset collected from Twitter in the politics domain. The ontology-based approach leverages entity extraction and concept mapping to improve the quantity and accuracy of identified concepts.
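    As a toy, purely invented illustration of the entity-level enrichment described above, the snippet below maps surface entities in a tweet to ontology concepts through a hand-made two-entry dictionary; a real system would use named-entity recognition and an actual domain ontology:

      # Invented toy example: enrich tweet entities with ontology concepts.
      # The "ontology" is a hand-made mapping, not a real knowledge base.
      TOY_ONTOLOGY = {  # entity -> (concept, domain)
          "Angela Merkel": ("Politician", "politics"),
          "Bundestag": ("Parliament", "politics"),
      }

      def enrich(tweet):
          # Return (entity, concept, domain) triples for entities in the tweet.
          return [(e, c, d) for e, (c, d) in TOY_ONTOLOGY.items() if e in tweet]

      print(enrich("Angela Merkel spoke in the Bundestag today."))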
  19. Combs, A.; Krippner, S.: Collective consciousness and the social brain (2008) 0.03
    
    Abstract
    This paper discusses supportive neurological and social evidence for 'collective consciousness', here understood as a shared sense of being together with others in a single or unified experience. Mirror neurons in the premotor and posterior parietal cortices respond to the intentions as well as the actions of other individuals. There are also mirror neurons in the anterior insula and anterior cingulate cortices which have been implicated in empathy. Many authors have considered the likely role of such mirror systems in the development of uniquely human aspects of sociality including language. Though not without criticism, Menant has made the case that mirror-neuron assisted exchanges aided the original advent of self-consciousness and intersubjectivity. Combining these ideas with social mirror theory it is not difficult to imagine the creation of similar dynamical patterns in the emotional and even cognitive neuronal activity of individuals in human groups, creating a feeling in which the participating members experience a unified sense of consciousness. Such instances pose a kind of 'binding problem' in which participating individuals exhibit a degree of 'entanglement'.
    Source
    Journal of consciousness studies. 15(2008) no.10-11, S.264-276
  20. Schaefer, A.; Jordan, M.; Klas, C.-P.; Fuhr, N.: Active support for query formulation in virtual digital libraries : a case study with DAFFODIL (2005) 0.02
    
    Abstract
    Daffodil is a front-end to federated, heterogeneous digital libraries aiming at strategic support of users during the information-seeking process. This is done by offering a variety of functions for searching, exploring and managing digital library objects. However, the distributed search increases response time, and the conceptual model of the underlying search processes is inherently weaker. This makes query formulation harder, and the resulting waiting times can be frustrating. In this paper, we investigate the concept of proactive support during the user's query formulation. To improve user efficiency and satisfaction, we implemented annotations, proactive support and error markers on the query form itself. These functions decrease the probability of syntactic or semantic errors in queries. Furthermore, the user is able to make better tactical decisions and feels more confident that the system handles the query properly. Evaluations with 30 subjects showed that user satisfaction improved, whereas no conclusive results were obtained for efficiency.
    Source
    Research and advanced technology for digital libraries : 9th European conference, ECDL 2005, Vienna, Austria, September 18-23, 2005 ; proceedings / Andreas Rauber ... (eds.)

Types

  • a 234
  • s 14
  • i 12
  • m 8
  • r 8
  • p 5
  • b 4
  • x 4
  • n 1