Search (2621 results, page 1 of 132)

  • language_ss:"e"
  1. Nicholas, D.: Assessing information needs : tools and techniques (1996) 0.16
    0.15523648 = product of:
      0.31047297 = sum of:
        0.31047297 = sum of:
          0.24171291 = weight(_text_:assessment in 5941) [ClassicSimilarity], result of:
            0.24171291 = score(doc=5941,freq=4.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.86265934 = fieldWeight in 5941, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.078125 = fieldNorm(doc=5941)
          0.06876006 = weight(_text_:22 in 5941) [ClassicSimilarity], result of:
            0.06876006 = score(doc=5941,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.38690117 = fieldWeight in 5941, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=5941)
      0.5 = coord(1/2)
    
    Date
    26. 2.2008 19:22:51
    LCSH
    Needs assessment
    Subject
    Needs assessment
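The nested trees above are Lucene ClassicSimilarity "explain" breakdowns. As a rough check, the score of result 1 can be reproduced with the classic TF-IDF formulas (a minimal sketch, not the catalog's actual code; queryNorm is copied from the explain output rather than derived, since it depends on the full query):

```python
import math

# ClassicSimilarity building blocks, as labeled in the explain tree:
#   tf(freq)       = sqrt(freq)
#   idf(df, N)     = 1 + ln(N / (df + 1))
#   queryWeight    = idf * queryNorm
#   fieldWeight    = tf * idf * fieldNorm
#   term score     = queryWeight * fieldWeight
#   document score = coord * sum(term scores)

def tf(freq):
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    i = idf(doc_freq, max_docs)
    query_weight = i * query_norm
    field_weight = tf(freq) * i * field_norm
    return query_weight * field_weight

QUERY_NORM = 0.050750602  # taken from the explain output above
MAX_DOCS = 44218

# Result 1 (doc 5941): "assessment" (freq=4) and "22" (freq=2),
# fieldNorm=0.078125, coord(1/2)=0.5
score = 0.5 * (
    term_score(4.0, 480, MAX_DOCS, QUERY_NORM, 0.078125)
    + term_score(2.0, 3622, MAX_DOCS, QUERY_NORM, 0.078125)
)
print(score)  # ≈ 0.15523648, matching the explain tree
```

The same recipe reproduces the other entries; only freq, docFreq, fieldNorm, and the coord factor change per document.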
  2. ¬The reference assessment manual (1995) 0.12
    0.11983845 = product of:
      0.2396769 = sum of:
        0.2396769 = sum of:
          0.17091684 = weight(_text_:assessment in 2996) [ClassicSimilarity], result of:
            0.17091684 = score(doc=2996,freq=2.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.60999227 = fieldWeight in 2996, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.078125 = fieldNorm(doc=2996)
          0.06876006 = weight(_text_:22 in 2996) [ClassicSimilarity], result of:
            0.06876006 = score(doc=2996,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.38690117 = fieldWeight in 2996, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=2996)
      0.5 = coord(1/2)
    
    Footnote
    Review in: College and research libraries. 57(1996) no.3, S.307-308 (M. Crist); Journal of academic librarianship 22(1996) no.4, S.314 (D. Ettinger)
  3. Tillema, H.: Development of potential : realizing development centres in organizations (1996) 0.11
    0.10943902 = product of:
      0.21887805 = sum of:
        0.21887805 = sum of:
          0.177622 = weight(_text_:assessment in 911) [ClassicSimilarity], result of:
            0.177622 = score(doc=911,freq=6.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.63392264 = fieldWeight in 911, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.046875 = fieldNorm(doc=911)
          0.041256037 = weight(_text_:22 in 911) [ClassicSimilarity], result of:
            0.041256037 = score(doc=911,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.23214069 = fieldWeight in 911, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=911)
      0.5 = coord(1/2)
    
    Abstract
    Are organizations interested in realizing the potential of their personnel? How far have they progressed in utilizing performance assessment instruments for developmental purposes? There is a growing need to redirect organizations toward greater knowledge productivity and to use personnel's competencies in a knowledge-productive way. Development centers have the potential to analyze and diagnose relevant competencies of personnel while at the same time providing a match with further development. Within a representative set of large Dutch organizations, already familiar with the concept of assessment centers, it was studied to what degree management conceptions and actual implementation conditions are present for the introduction of development centers. The advantages of development centers as a knowledge-productive tool for assessment in organizations are elaborated.
    Source
    Knowledge management: organization competence and methodology. Proceedings of the Fourth International ISMICK Symposium, 21-22 October 1996, Netherlands. Ed.: J.F. Schreinemakers
  4. Meadow, C.T.: ¬A proposed method of measuring the utility of individual information retrieval tools (1996) 0.11
    0.10866555 = product of:
      0.2173311 = sum of:
        0.2173311 = sum of:
          0.16919905 = weight(_text_:assessment in 6611) [ClassicSimilarity], result of:
            0.16919905 = score(doc=6611,freq=4.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.6038616 = fieldWeight in 6611, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.0546875 = fieldNorm(doc=6611)
          0.048132043 = weight(_text_:22 in 6611) [ClassicSimilarity], result of:
            0.048132043 = score(doc=6611,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.2708308 = fieldWeight in 6611, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=6611)
      0.5 = coord(1/2)
    
    Abstract
    Proposes a new method of evaluating information retrieval systems by concentrating on individual tools in the context of their use, rather than systems as a whole. A tool is a command, its menu or graphic interface equivalent, or a move or stratagem. A user would render an assessment of the relative success of a small part of a search, and every tool used in that part would be credited with a contribution to the result, whether positive or negative. The cumulative scores would provide an assessment of the overall utility of the tool
    Source
    Canadian journal of information and library science. 21(1996) no.1, S.22-34
  5. Bertot, J.C.; McClure, C.R.: Developing assessment techniques for statewide electronic networks (1996) 0.11
    0.10866555 = product of:
      0.2173311 = sum of:
        0.2173311 = sum of:
          0.16919905 = weight(_text_:assessment in 2173) [ClassicSimilarity], result of:
            0.16919905 = score(doc=2173,freq=4.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.6038616 = fieldWeight in 2173, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2173)
          0.048132043 = weight(_text_:22 in 2173) [ClassicSimilarity], result of:
            0.048132043 = score(doc=2173,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.2708308 = fieldWeight in 2173, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2173)
      0.5 = coord(1/2)
    
    Abstract
    Reports on a study assessing statewide electronic network initiatives using the Maryland Sailor network as a case study. Aims to develop assessment techniques and indicators for the evaluation of statewide electronic networks. Defines key components of the statewide networked environment. Develops and operationalizes performance measures for networked information technologies and services provided through statewide networks. Explores several methods of evaluating statewide electronic networks. Identifies and discusses key issues and preliminary findings that affect the successful evaluation of statewide networked services
    Date
    7.11.1998 20:27:22
  6. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.10123343 = sum of:
      0.08060541 = product of:
        0.24181622 = sum of:
          0.24181622 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.24181622 = score(doc=562,freq=2.0), product of:
              0.43026417 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.050750602 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.020628018 = product of:
        0.041256037 = sum of:
          0.041256037 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.041256037 = score(doc=562,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
  7. Faro, S.; Francesconi, E.; Marinai, E.; Sandrucci, V.: Report on execution and results of the interoperability tests (2008) 0.10
    0.09587076 = product of:
      0.19174153 = sum of:
        0.19174153 = sum of:
          0.13673347 = weight(_text_:assessment in 7411) [ClassicSimilarity], result of:
            0.13673347 = score(doc=7411,freq=2.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.4879938 = fieldWeight in 7411, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.0625 = fieldNorm(doc=7411)
          0.05500805 = weight(_text_:22 in 7411) [ClassicSimilarity], result of:
            0.05500805 = score(doc=7411,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.30952093 = fieldWeight in 7411, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=7411)
      0.5 = coord(1/2)
    
    Abstract
    - Formal characterization given to the thesaurus mapping problem
    - Interoperability workflow
      - Thesauri SKOS Core transformation
      - Thesaurus mapping algorithms implementation
    - The "gold standard" data set and the THALEN application
    - Thesaurus interoperability assessment measures
    - Experimental results
    Date
    7.11.2008 10:40:22
  8. Ashton, J.: ONE: the final OPAC frontier (1998) 0.10
    0.09587076 = product of:
      0.19174153 = sum of:
        0.19174153 = sum of:
          0.13673347 = weight(_text_:assessment in 2588) [ClassicSimilarity], result of:
            0.13673347 = score(doc=2588,freq=2.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.4879938 = fieldWeight in 2588, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.0625 = fieldNorm(doc=2588)
          0.05500805 = weight(_text_:22 in 2588) [ClassicSimilarity], result of:
            0.05500805 = score(doc=2588,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.30952093 = fieldWeight in 2588, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=2588)
      0.5 = coord(1/2)
    
    Abstract
    Describes the European Commission's OPAC Network in Europe (ONE) project, which aims to make it simpler to search a number of major European OPACs across all frontiers via a single online interface. Explains how this is done, the British Library's involvement in the project, an assessment of the project, and plans for the future
    Source
    Select newsletter. 1998, no.22, Spring, S.5-6
  9. Jiang, Z.; Gu, Q.; Yin, Y.; Wang, J.; Chen, D.: GRAW+ : a two-view graph propagation method with word coupling for readability assessment (2019) 0.09
    0.09119919 = product of:
      0.18239838 = sum of:
        0.18239838 = sum of:
          0.14801835 = weight(_text_:assessment in 5218) [ClassicSimilarity], result of:
            0.14801835 = score(doc=5218,freq=6.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.5282689 = fieldWeight in 5218, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5218)
          0.03438003 = weight(_text_:22 in 5218) [ClassicSimilarity], result of:
            0.03438003 = score(doc=5218,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.19345059 = fieldWeight in 5218, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5218)
      0.5 = coord(1/2)
    
    Abstract
    Existing methods for readability assessment usually construct inductive classification models to assess the readability of singular text documents based on extracted features, which have been demonstrated to be effective. However, they rarely make use of the interrelationship among documents on readability, which can help increase the accuracy of readability assessment. In this article, we adopt a graph-based classification method to model and utilize the relationship among documents using the coupled bag-of-words model. We propose a word coupling method to build the coupled bag-of-words model by estimating the correlation between words on reading difficulty. In addition, we propose a two-view graph propagation method to make use of both the coupled bag-of-words model and the linguistic features. Our method employs a graph merging operation to combine graphs built according to different views, and improves the label propagation by incorporating the ordinal relation among reading levels. Experiments were conducted on both English and Chinese data sets, and the results demonstrate both effectiveness and potential of the method.
    Date
    15. 4.2019 13:46:22
  10. Chang, C.-H.; Hsu, C.-C.: Integrating query expansion and conceptual relevance feedback for personalized Web information retrieval (1998) 0.08
    0.083886914 = product of:
      0.16777383 = sum of:
        0.16777383 = sum of:
          0.11964179 = weight(_text_:assessment in 1319) [ClassicSimilarity], result of:
            0.11964179 = score(doc=1319,freq=2.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.4269946 = fieldWeight in 1319, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1319)
          0.048132043 = weight(_text_:22 in 1319) [ClassicSimilarity], result of:
            0.048132043 = score(doc=1319,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.2708308 = fieldWeight in 1319, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1319)
      0.5 = coord(1/2)
    
    Abstract
    Keyword-based querying has been an immediate and efficient way to specify and retrieve the related information that the user inquires about. However, conventional document ranking based on an automatic assessment of document relevance to the query may not be the best approach when little information is given. Proposes integrating two existing techniques, query expansion and relevance feedback, to achieve a concept-based information search for the Web
    Date
    1. 8.1996 22:08:06
  11. Hammwöhner, R.: TransRouter revisited : Decision support in the routing of translation projects (2000) 0.08
    0.083886914 = product of:
      0.16777383 = sum of:
        0.16777383 = sum of:
          0.11964179 = weight(_text_:assessment in 5483) [ClassicSimilarity], result of:
            0.11964179 = score(doc=5483,freq=2.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.4269946 = fieldWeight in 5483, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5483)
          0.048132043 = weight(_text_:22 in 5483) [ClassicSimilarity], result of:
            0.048132043 = score(doc=5483,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.2708308 = fieldWeight in 5483, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5483)
      0.5 = coord(1/2)
    
    Abstract
    This paper gives an outline of the final results of the TransRouter project, within which a decision support system for translation managers has been developed to support the selection of appropriate routes for translation projects. Emphasis is put on the decision model, which is based on a stepwise refined assessment of translation routes. The workflow of using this system is considered as well
    Date
    10.12.2000 18:22:35
  12. Striedieck, S.: Online catalog maintenance : the OOPS command in LIAS (1985) 0.08
    0.083886914 = product of:
      0.16777383 = sum of:
        0.16777383 = sum of:
          0.11964179 = weight(_text_:assessment in 366) [ClassicSimilarity], result of:
            0.11964179 = score(doc=366,freq=2.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.4269946 = fieldWeight in 366, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.0546875 = fieldNorm(doc=366)
          0.048132043 = weight(_text_:22 in 366) [ClassicSimilarity], result of:
            0.048132043 = score(doc=366,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.2708308 = fieldWeight in 366, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=366)
      0.5 = coord(1/2)
    
    Abstract
    LIAS, the Pennsylvania State University's (Penn State) integrated interactive online system, provides for messaging by the user to inform library staff of errors found in bibliographic records. The message is sent by use of the OOPS command, and results in a printout which is used by processing staff for online catalog maintenance. This article describes LIAS, the use of the OOPS command, the processing of the resulting OOPS reports, an assessment of the effect of its use, and some speculation on the expansion of the LIAS message system for use in catalog maintenance.
    Date
    7. 1.2007 13:22:30
  13. Stalberg, E.; Cronin, C.: Assessing the cost and value of bibliographic control (2011) 0.08
    0.083886914 = product of:
      0.16777383 = sum of:
        0.16777383 = sum of:
          0.11964179 = weight(_text_:assessment in 2592) [ClassicSimilarity], result of:
            0.11964179 = score(doc=2592,freq=2.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.4269946 = fieldWeight in 2592, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2592)
          0.048132043 = weight(_text_:22 in 2592) [ClassicSimilarity], result of:
            0.048132043 = score(doc=2592,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.2708308 = fieldWeight in 2592, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2592)
      0.5 = coord(1/2)
    
    Abstract
    In June 2009, the Association for Library Collections and Technical Services Heads of Technical Services in Large Research Libraries Interest Group established the Task Force on Cost/Value Assessment of Bibliographic Control to address recommendation 5.1.1.1 of On the Record: Report of the Library of Congress Working Group on the Future of Bibliographic Control, which focused on developing measures for costs, benefits, and value of bibliographic control. This paper outlines results of that task force's efforts to develop and articulate metrics for evaluating the cost and value of cataloging activities specifically, and offers some next steps that the community could take to further the profession's collective understanding of the costs and values associated with bibliographic control.
    Date
    10. 9.2000 17:38:22
  14. Mugridge, R.L.; Edmunds, J.: Batchloading MARC bibliographic records (2012) 0.08
    0.083886914 = product of:
      0.16777383 = sum of:
        0.16777383 = sum of:
          0.11964179 = weight(_text_:assessment in 2600) [ClassicSimilarity], result of:
            0.11964179 = score(doc=2600,freq=2.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.4269946 = fieldWeight in 2600, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2600)
          0.048132043 = weight(_text_:22 in 2600) [ClassicSimilarity], result of:
            0.048132043 = score(doc=2600,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.2708308 = fieldWeight in 2600, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2600)
      0.5 = coord(1/2)
    
    Abstract
    Research libraries are using batchloading to provide access to many resources that they would otherwise be unable to catalog given the staff and other resources available. To explore how such libraries are managing their batchloading activities, the authors conducted a survey of the Association for Library Collections and Technical Services Directors of Large Research Libraries Interest Group member libraries. The survey addressed staffing, budgets, scope, workflow, management, quality standards, information technology support, collaborative efforts, and assessment of batchloading activities. The authors provide an analysis of the survey results along with suggestions for process improvements and future research.
    Date
    10. 9.2000 17:38:22
  15. Raan, A.F.J. van: Statistical properties of bibliometric indicators : research group indicator distributions and correlations (2006) 0.08
    0.08044747 = product of:
      0.16089495 = sum of:
        0.16089495 = sum of:
          0.1025501 = weight(_text_:assessment in 5275) [ClassicSimilarity], result of:
            0.1025501 = score(doc=5275,freq=2.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.36599535 = fieldWeight in 5275, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.046875 = fieldNorm(doc=5275)
          0.05834485 = weight(_text_:22 in 5275) [ClassicSimilarity], result of:
            0.05834485 = score(doc=5275,freq=4.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.32829654 = fieldWeight in 5275, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5275)
      0.5 = coord(1/2)
    
    Abstract
    In this article we present an empirical approach to the study of the statistical properties of bibliometric indicators on a very relevant but not simply available aggregation level: the research group. We focus on the distribution functions of a coherent set of indicators that are used frequently in the analysis of research performance. In this sense, the coherent set of indicators acts as a measuring instrument. Better insight into the statistical properties of a measuring instrument is necessary to enable assessment of the instrument itself. The most basic distribution in bibliometric analysis is the distribution of citations over publications, and this distribution is very skewed. Nevertheless, we clearly observe the working of the central limit theorem and find that at the level of research groups the distribution functions of the main indicators, particularly the journal-normalized and the field-normalized indicators, approach normal distributions. The results of our study underline the importance of the idea of group oeuvre, that is, the role of sets of related publications as a unit of analysis.
    Date
    22. 7.2006 16:20:22
  16. Pal, S.; Mitra, M.; Kamps, J.: Evaluation effort, reliability and reusability in XML retrieval (2011) 0.08
    0.07761824 = product of:
      0.15523648 = sum of:
        0.15523648 = sum of:
          0.12085646 = weight(_text_:assessment in 4197) [ClassicSimilarity], result of:
            0.12085646 = score(doc=4197,freq=4.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.43132967 = fieldWeight in 4197, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4197)
          0.03438003 = weight(_text_:22 in 4197) [ClassicSimilarity], result of:
            0.03438003 = score(doc=4197,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.19345059 = fieldWeight in 4197, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4197)
      0.5 = coord(1/2)
    
    Abstract
    The Initiative for the Evaluation of XML retrieval (INEX) provides a TREC-like platform for evaluating content-oriented XML retrieval systems. Since 2007, INEX has been using a set of precision-recall based metrics for its ad hoc tasks. The authors investigate the reliability and robustness of these focused retrieval measures, and of the INEX pooling method. They explore four specific questions: How reliable are the metrics when assessments are incomplete, or when query sets are small? What is the minimum pool/query-set size that can be used to reliably evaluate systems? Can the INEX collections be used to fairly evaluate "new" systems that did not participate in the pooling process? And, for a fixed amount of assessment effort, would this effort be better spent in thoroughly judging a few queries, or in judging many queries relatively superficially? The authors' findings validate properties of precision-recall-based metrics observed in document retrieval settings. Early precision measures are found to be more error-prone and less stable under incomplete judgments and small topic-set sizes. They also find that system rankings remain largely unaffected even when assessment effort is substantially (but systematically) reduced, and confirm that the INEX collections remain usable when evaluating nonparticipating systems. Finally, they observe that for a fixed amount of effort, judging shallow pools for many queries is better than judging deep pools for a smaller set of queries. However, when judging only a random sample of a pool, it is better to completely judge fewer topics than to partially judge many topics. This result confirms the effectiveness of pooling methods.
    Date
    22. 1.2011 14:20:56
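The pooling and incomplete-judgment questions investigated in the abstract above can be sketched in a few lines. The runs, relevance labels, and pool depths below are invented for illustration and are not INEX data; the only assumption carried over from the abstract is the standard convention that unjudged (unpooled) documents count as nonrelevant.

```python
# Hypothetical sketch of TREC/INEX-style pooling and evaluation under
# incomplete judgments. All runs, documents, and relevance labels are invented.

def build_pool(runs, depth):
    """Union of the top-`depth` documents from each system's ranked run."""
    pool = set()
    for run in runs:
        pool.update(run[:depth])
    return pool

def average_precision(run, relevant, judged):
    """AP over a ranked run; documents outside the judged pool count as nonrelevant."""
    hits, precision_sum = 0, 0.0
    for rank, doc in enumerate(run, start=1):
        if doc in judged and doc in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant) if relevant else 0.0

# Three toy runs over documents d1..d8
runs = [
    ["d1", "d2", "d3", "d4"],
    ["d2", "d5", "d1", "d6"],
    ["d7", "d1", "d8", "d2"],
]
relevant = {"d1", "d2", "d4"}

shallow = build_pool(runs, depth=2)  # low judging effort: 2 docs per run
deep = build_pool(runs, depth=4)     # exhaustive pool for these toy runs

ap_shallow = average_precision(runs[0], relevant, shallow)
ap_deep = average_precision(runs[0], relevant, deep)
```

With the shallow pool, the relevant document d4 is never judged, so the first run's AP drops relative to the deep pool: this is the kind of metric sensitivity to pool depth that the study measures at scale.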
  17. Corts Mendes, L.; Pacini de Moura, A.: Documentation as knowledge organization : an assessment of Paul Otlet's proposals (2014) 0.08
    0.07761824 = product of:
      0.15523648 = sum of:
        0.15523648 = sum of:
          0.12085646 = weight(_text_:assessment in 1471) [ClassicSimilarity], result of:
            0.12085646 = score(doc=1471,freq=4.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.43132967 = fieldWeight in 1471, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1471)
          0.03438003 = weight(_text_:22 in 1471) [ClassicSimilarity], result of:
            0.03438003 = score(doc=1471,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.19345059 = fieldWeight in 1471, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1471)
      0.5 = coord(1/2)
    
    Abstract
    This paper proposes an assessment of Paul Otlet's Documentation anchored in Birger Hjørland's argument that the field of Knowledge Organization (KO) must be formed by two interdependent views: a broad conception of how knowledge is socially and intellectually produced and organized, and a narrow view that deals with the organization of the documents that register knowledge. Otlet's conceptions of individual and collective knowledge are addressed, as well as the role of documents in their conservation and communication, in order to show how the intended universal application of Documentation's principles and methods was supposed to make registered knowledge easily accessible and clearly apprehended as a unified whole. It concludes that Otlet's Documentation fulfils, in its own context, Hjørland's requirement that the narrow conceptions of the KO field be sustained by broader views of the organization of knowledge, and that it therefore qualifies as a historical component of KO, capable of contributing as such to its epistemological and theoretical discussions.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  18. Chaudhry, A.S.; Ashoor, S.: Functional performance of automated systems : a comparative study of HORIZON, INNOPAC and VTLS (1998) 0.07
    0.071903065 = product of:
      0.14380613 = sum of:
        0.14380613 = sum of:
          0.1025501 = weight(_text_:assessment in 3022) [ClassicSimilarity], result of:
            0.1025501 = score(doc=3022,freq=2.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.36599535 = fieldWeight in 3022, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.046875 = fieldNorm(doc=3022)
          0.041256037 = weight(_text_:22 in 3022) [ClassicSimilarity], result of:
            0.041256037 = score(doc=3022,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.23214069 = fieldWeight in 3022, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3022)
      0.5 = coord(1/2)
    
    Abstract
    Provides functional performance data drawn from an analysis of the capabilities and functionality of three major library automation systems: HORIZON, INNOPAC and the Virginia Tech Library System (VTLS). The assessment was based on vendor input as well as on feedback from libraries of different types in different parts of the world. Objective criteria based on a numerical scoring scheme were used to assess system performance in six major functional areas: acquisition; cataloguing; circulation; OPAC; reference and information services; and serials control. The functional performance data should be useful both for libraries looking for new systems and for those already computerised and interested in enhancing their present systems. In addition, data on the extent to which libraries utilise system capabilities should also be of interest to system vendors.
    Date
    22. 2.1999 14:03:24
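The numerical scoring scheme described in the abstract above can be sketched as a weighted sum over the six functional areas. The per-area scores and the equal default weights below are invented for illustration and do not come from the study.

```python
# Hypothetical sketch of a numerical scoring scheme for comparing library
# automation systems across the six functional areas named in the abstract.
# All per-area scores and weights below are invented for illustration.

AREAS = ["acquisition", "cataloguing", "circulation",
         "OPAC", "reference", "serials"]

def total_score(area_scores, weights=None):
    """Weighted sum of per-area scores (equal weights by default)."""
    if weights is None:
        weights = {a: 1.0 for a in area_scores}
    return sum(area_scores[a] * weights[a] for a in area_scores)

systems = {
    "HORIZON": dict(zip(AREAS, [8, 9, 7, 8, 6, 7])),
    "INNOPAC": dict(zip(AREAS, [9, 8, 8, 7, 7, 8])),
    "VTLS":    dict(zip(AREAS, [7, 9, 8, 8, 6, 7])),
}

# Rank systems by overall functional performance, highest first
ranking = sorted(systems, key=lambda s: total_score(systems[s]), reverse=True)
```

A library could substitute its own weights (e.g. emphasising serials control) to re-rank the same raw scores, which is the practical appeal of a criteria-based scheme over a single vendor-supplied figure.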
  19. Li, X.: Designing an interactive Web tutorial with cross-browser dynamic HTML (2000) 0.07
    0.071903065 = product of:
      0.14380613 = sum of:
        0.14380613 = sum of:
          0.1025501 = weight(_text_:assessment in 4897) [ClassicSimilarity], result of:
            0.1025501 = score(doc=4897,freq=2.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.36599535 = fieldWeight in 4897, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.046875 = fieldNorm(doc=4897)
          0.041256037 = weight(_text_:22 in 4897) [ClassicSimilarity], result of:
            0.041256037 = score(doc=4897,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.23214069 = fieldWeight in 4897, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4897)
      0.5 = coord(1/2)
    
    Abstract
    Texas A&M University Libraries developed a Web-based training (WBT) application for LandView III, a federal depository CD-ROM publication, using cross-browser dynamic HTML (DHTML) and other Web technologies. The interactive, self-paced tutorial demonstrates the major features of the CD-ROM and shows how to navigate its programs. The tutorial features dynamic HTML techniques such as hiding, showing and moving layers; dragging objects; and Windows-style drop-down menus. It also integrates interactive forms, the common gateway interface (CGI), frames, and animated GIF images in the design of the WBT. After describing the design and implementation of the tutorial project, the article evaluates usage statistics and user feedback, assesses the tutorial's strengths and weaknesses, and compares it with other common types of training methods. The article thus describes an innovative approach to CD-ROM training using advanced Web technologies such as dynamic HTML, which can simulate and demonstrate the interactive use of the CD-ROM as well as the actual search process of a database.
    Date
    28. 1.2006 19:21:22
  20. Margaritopoulos, T.; Margaritopoulos, M.; Mavridis, I.; Manitsaris, A.: ¬A conceptual framework for metadata quality assessment (2008) 0.07
    0.071903065 = product of:
      0.14380613 = sum of:
        0.14380613 = sum of:
          0.1025501 = weight(_text_:assessment in 2643) [ClassicSimilarity], result of:
            0.1025501 = score(doc=2643,freq=2.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.36599535 = fieldWeight in 2643, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.046875 = fieldNorm(doc=2643)
          0.041256037 = weight(_text_:22 in 2643) [ClassicSimilarity], result of:
            0.041256037 = score(doc=2643,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.23214069 = fieldWeight in 2643, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2643)
      0.5 = coord(1/2)
    
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
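The per-hit score breakdowns shown throughout these results follow Lucene's classic tf-idf similarity. As a minimal sketch (not the full Lucene pipeline: coord is shown separately in the breakdowns, and fieldNorm is taken as given rather than decoded from the index), the leaf values for the "assessment" term in entry 18 can be reproduced:

```python
import math

# Sketch of Lucene ClassicSimilarity leaf computations, using the numbers
# from the "assessment" term of entry 18 (doc 3022) above:
#   freq = 2.0, docFreq = 480, maxDocs = 44218,
#   queryNorm = 0.050750602, fieldNorm = 0.046875

def tf(freq):
    """Term-frequency component: sqrt of the within-document frequency."""
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    """Inverse document frequency: 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + math.log(max_docs / (doc_freq + 1))

query_norm = 0.050750602
field_norm = 0.046875  # stored length norm for this field, taken as given

idf_assessment = idf(480, 44218)                       # ~ 5.52102
query_weight = idf_assessment * query_norm             # ~ 0.2801951
field_weight = tf(2.0) * idf_assessment * field_norm   # ~ 0.36599535
score = query_weight * field_weight                    # ~ 0.1025501
```

The same three factors (tf, idf, fieldNorm) explain every `fieldWeight` line in the breakdowns above; only the frequencies, document frequencies, and field norms differ per hit.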

Types

  • a 2307
  • m 180
  • s 114
  • el 85
  • b 34
  • r 14
  • x 8
  • p 4
  • i 3
  • n 2
  • h 1
