Search (2489 results, page 1 of 125)

  • language_ss:"e"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.17
    0.17440404 = product of:
      0.34880808 = sum of:
        0.29797146 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
          0.29797146 = score(doc=562,freq=2.0), product of:
            0.5301813 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.062536046 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.050836623 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
          0.050836623 = score(doc=562,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.23214069 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
      0.5 = coord(2/4)
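    Each hit above carries a Lucene "explain" tree in ClassicSimilarity (TF-IDF) notation. As a sanity check, the top result's score can be reproduced from the values shown; a minimal Python sketch, with the constants copied from the explain tree above:

    ```python
    import math

    # Lucene ClassicSimilarity building blocks (per the TFIDFSimilarity docs):
    #   tf(freq)          = sqrt(freq)
    #   idf(docFreq)      = 1 + ln(maxDocs / (docFreq + 1))
    #   queryWeight(term) = idf * queryNorm
    #   fieldWeight(term) = tf * idf * fieldNorm
    #   score(doc)        = coord * sum over terms of queryWeight * fieldWeight

    def idf(doc_freq: int, max_docs: int) -> float:
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_weight(freq: float, idf_val: float, query_norm: float,
                    field_norm: float) -> float:
        tf = math.sqrt(freq)
        return (idf_val * query_norm) * (tf * idf_val * field_norm)

    QUERY_NORM = 0.062536046   # queryNorm from the explain tree
    FIELD_NORM = 0.046875      # fieldNorm(doc=562)

    w_3a = term_weight(2.0, idf(24, 44218), QUERY_NORM, FIELD_NORM)    # _text_:3a
    w_22 = term_weight(2.0, idf(3622, 44218), QUERY_NORM, FIELD_NORM)  # _text_:22

    score = 0.5 * (w_3a + w_22)  # coord(2/4) = 0.5: 2 of 4 query terms matched
    print(score)                 # ~0.17440404, matching the listed score
    ```

    The same arithmetic accounts for every explain tree in this result list; only freq, idf, and fieldNorm vary per document.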
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  2. Simpson, B.; Williams, P.: ¬The cataloger's workstation revisited : utilizing cataloger's desktop (2001) 0.15
    0.1484918 = product of:
      0.2969836 = sum of:
        0.2376742 = weight(_text_:assess in 4121) [ClassicSimilarity], result of:
          0.2376742 = score(doc=4121,freq=4.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.64474034 = fieldWeight in 4121, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4121)
        0.059309397 = weight(_text_:22 in 4121) [ClassicSimilarity], result of:
          0.059309397 = score(doc=4121,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.2708308 = fieldWeight in 4121, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4121)
      0.5 = coord(2/4)
    
    Abstract
    A few years into the development of Cataloger's Desktop, an electronic cataloging tool aggregator available through the Library of Congress, it is an opportune time to assess its impact on cataloging operations. A search for online cataloging tools on the Internet indicates a proliferation of cataloging tool aggregators which provide access to online documentation related to cataloging practices and procedures. Cataloger's Desktop stands out as a leader among these aggregators. Results of a survey of 159 academic ARL and large public libraries, assessing their reasons for use or non-use of Cataloger's Desktop, highlight the necessity of developing strategies for its successful implementation, including training staff, providing documentation, and managing technical issues.
    Date
    28. 7.2006 20:09:22
  3. Candela, G.: ¬An automatic data quality approach to assess semantic data from cultural heritage institutions (2023) 0.15
    0.1484918 = product of:
      0.2969836 = sum of:
        0.2376742 = weight(_text_:assess in 997) [ClassicSimilarity], result of:
          0.2376742 = score(doc=997,freq=4.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.64474034 = fieldWeight in 997, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.0546875 = fieldNorm(doc=997)
        0.059309397 = weight(_text_:22 in 997) [ClassicSimilarity], result of:
          0.059309397 = score(doc=997,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.2708308 = fieldWeight in 997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=997)
      0.5 = coord(2/4)
    
    Abstract
    In recent years, cultural heritage institutions have been exploring the benefits of applying Linked Open Data to their catalogs and digital materials. Innovative and creative methods have emerged to publish and reuse digital contents to promote computational access, such as the concepts of Labs and Collections as Data. Data quality has become a requirement for researchers and training methods based on artificial intelligence and machine learning. This article explores how the quality of Linked Open Data made available by cultural heritage institutions can be automatically assessed. The results obtained can be useful for other institutions who wish to publish and assess their collections.
    Date
    22. 6.2023 18:23:31
  4. Batt, C.: ¬The libraries of the future : public libraries and the Internet (1996) 0.13
    0.12992597 = product of:
      0.25985193 = sum of:
        0.19206975 = weight(_text_:assess in 4862) [ClassicSimilarity], result of:
          0.19206975 = score(doc=4862,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.5210289 = fieldWeight in 4862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.0625 = fieldNorm(doc=4862)
        0.06778217 = weight(_text_:22 in 4862) [ClassicSimilarity], result of:
          0.06778217 = score(doc=4862,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.30952093 = fieldWeight in 4862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=4862)
      0.5 = coord(2/4)
    
    Abstract
    Considers the potential for service development in public libraries offered by the Internet and describes the traditional models of network access and their lack of relevance to public libraries. Describes 2 research projects currently being undertaken by public libraries to assess the value of the Internet to their services: ITPOINT, a project being conducted at Chelmsley Wood Library, Solihull, UK; and CLIP, the Croydon Libraries Internet project. Presents a range of new service paradigms and suggests that public libraries will become even more central to people's lives than they are today
    Source
    IFLA journal. 22(1996) no.1, S.27-30
  5. Hancock-Beaulieu, M.: Searching behaviour and the evaluation of online catalogues (1991) 0.13
    0.12992597 = product of:
      0.25985193 = sum of:
        0.19206975 = weight(_text_:assess in 2765) [ClassicSimilarity], result of:
          0.19206975 = score(doc=2765,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.5210289 = fieldWeight in 2765, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.0625 = fieldNorm(doc=2765)
        0.06778217 = weight(_text_:22 in 2765) [ClassicSimilarity], result of:
          0.06778217 = score(doc=2765,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.30952093 = fieldWeight in 2765, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=2765)
      0.5 = coord(2/4)
    
    Abstract
    Presents a brief report on a study, carried out by the Centre for Interactive Systems Research, City University, to investigate the techniques used for evaluating OPACs: to explore and assess different data gathering methods in studying information seeking behaviour at the on-line catalogue; and to examine how a transaction logging facility could be enhanced to serve as a more effective diagnostic tool. For a full report see British Library research paper 78
    Pages
    S.20-22
  6. Tomney, H.; Burton, P.F.: Electronic journals : a case study of usage and attitudes among academics (1998) 0.13
    0.12992597 = product of:
      0.25985193 = sum of:
        0.19206975 = weight(_text_:assess in 3687) [ClassicSimilarity], result of:
          0.19206975 = score(doc=3687,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.5210289 = fieldWeight in 3687, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.0625 = fieldNorm(doc=3687)
        0.06778217 = weight(_text_:22 in 3687) [ClassicSimilarity], result of:
          0.06778217 = score(doc=3687,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.30952093 = fieldWeight in 3687, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=3687)
      0.5 = coord(2/4)
    
    Abstract
    Reports results of a questionnaire survey to assess the attitudes of scholarly users towards electronic journals and examines the current level of use of these publications by university academics in 2 departments in each of 5 faculties of a UK university
    Date
    22. 5.1999 19:07:29
  7. Rothera, H.: Framing the subject : a subject indexing model for electronic bibliographic databases in the humanities (1998) 0.13
    0.12992597 = product of:
      0.25985193 = sum of:
        0.19206975 = weight(_text_:assess in 3904) [ClassicSimilarity], result of:
          0.19206975 = score(doc=3904,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.5210289 = fieldWeight in 3904, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.0625 = fieldNorm(doc=3904)
        0.06778217 = weight(_text_:22 in 3904) [ClassicSimilarity], result of:
          0.06778217 = score(doc=3904,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.30952093 = fieldWeight in 3904, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=3904)
      0.5 = coord(2/4)
    
    Abstract
    Reviews in detail an MA dissertation to assess the scope and value of electronic bibliographic databases in the humanities. Develops and demonstrates a model to determine essential and desirable indexing terms and to highlight some inherent complexities. Assesses features of commercially available databases against this model. Presents personal observations on the dissertation experience and on prospects for further research in this area
    Source
    Library and information research news. 22(1998) no.71, S.24-33
  8. Allen, B.L.: Designing information systems for user abilities and tasks : an experimental study (1998) 0.13
    0.12727869 = product of:
      0.25455737 = sum of:
        0.20372075 = weight(_text_:assess in 2664) [ClassicSimilarity], result of:
          0.20372075 = score(doc=2664,freq=4.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.5526346 = fieldWeight in 2664, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.046875 = fieldNorm(doc=2664)
        0.050836623 = weight(_text_:22 in 2664) [ClassicSimilarity], result of:
          0.050836623 = score(doc=2664,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.23214069 = fieldWeight in 2664, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=2664)
      0.5 = coord(2/4)
    
    Abstract
    With the many choices that can be built into information systems, it is possible to customize such systems for users, based on the tasks that users are accomplishing, on the personal characteristics of users, or a combination of these factors. Reports results of an experiment in which detailed logging of the use of experimental information systems was used to determine the optimal configuration of these systems for each user. 4 experimental systems were specially designed, all using a single database of 668 bibliographic records. Tasks were varied, and the cognitive abilities of users were tested to assess one important personal characteristic. Results showed that it was possible to create an optimal configuration to match the cognitive abilities of users, but that it was more difficult to assess which configuration was the best match for specific tasks. The person-in-task interaction proved to be the least powerful indicator of design configurations. These results suggest that usable information systems can be created for users by careful analysis of the interaction of design features with personal characteristics such as cognitive abilities
    Source
    Online and CD-ROM review. 22(1998) no.3, S.139-153
  9. Mundle, K.; Huie, H.; Bangalore, N.S.: ARL Library Catalog Department Web sites : an evaluative study (2006) 0.13
    0.12514274 = product of:
      0.25028548 = sum of:
        0.20792161 = weight(_text_:assess in 771) [ClassicSimilarity], result of:
          0.20792161 = score(doc=771,freq=6.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.5640303 = fieldWeight in 771, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.0390625 = fieldNorm(doc=771)
        0.042363856 = weight(_text_:22 in 771) [ClassicSimilarity], result of:
          0.042363856 = score(doc=771,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.19345059 = fieldWeight in 771, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=771)
      0.5 = coord(2/4)
    
    Abstract
    User-friendly and content-rich Web sites are indispensable for any knowledge-based organization. Web site evaluation studies point to ways to improve the efficiency and usability of Web sites. Library catalog or technical services department Web sites have proliferated in the past few years, but there is no systematic and accepted method that evaluates the performance of these Web sites. An earlier study by Mundle, Zhao, and Bangalore evaluated catalog department Web sites within the consortium of the Committee on Institutional Cooperation (CIC) libraries, proposed a model to assess these Web sites, and recommended desirable features for them. The present study was undertaken to test the model further and to assess the recommended features. The study evaluated the catalog department Web sites of Association of Research Libraries members. It validated the model proposed, and confirmed the use of the performance index (PI) as an objective measure to assess the usability or workability of a catalog department Web site. The model advocates using a PI of 1.5 as the benchmark for catalog department Web site evaluation by employing the study tool and scoring method suggested in this paper.
    Date
    10. 9.2000 17:38:22
  10. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.12
    0.12415478 = product of:
      0.4966191 = sum of:
        0.4966191 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
          0.4966191 = score(doc=1826,freq=2.0), product of:
            0.5301813 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.062536046 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.25 = coord(1/4)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  11. Khurshid, Z.: ¬The impact of information technology on job requirements and qualifications for catalogers (2003) 0.11
    0.11368521 = product of:
      0.22737043 = sum of:
        0.16806103 = weight(_text_:assess in 2323) [ClassicSimilarity], result of:
          0.16806103 = score(doc=2323,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.45590025 = fieldWeight in 2323, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2323)
        0.059309397 = weight(_text_:22 in 2323) [ClassicSimilarity], result of:
          0.059309397 = score(doc=2323,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.2708308 = fieldWeight in 2323, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2323)
      0.5 = coord(2/4)
    
    Abstract
    Information technology (IT), encompassing integrated library systems, computer hardware and software, CD-ROM, the Internet, and other domains, including MARC 21 formats, CORC, and metadata standards (Dublin Core, TEI, XML, RDF), has produced far-reaching changes in the job functions of catalogers. Libraries are now coming up with a new set of recruiting requirements for these positions. This paper aims to review job advertisements published in American Libraries (AL) and College and Research Libraries News (C&RL NEWS) to assess the impact of the use of IT in libraries on job requirements and qualifications for catalogers.
    Source
    Information technology and libraries. 22(2003) no. March, S.18-21
  12. Newman, D.J.; Block, S.: Probabilistic topic decomposition of an eighteenth-century American newspaper (2006) 0.11
    0.11368521 = product of:
      0.22737043 = sum of:
        0.16806103 = weight(_text_:assess in 5291) [ClassicSimilarity], result of:
          0.16806103 = score(doc=5291,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.45590025 = fieldWeight in 5291, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5291)
        0.059309397 = weight(_text_:22 in 5291) [ClassicSimilarity], result of:
          0.059309397 = score(doc=5291,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.2708308 = fieldWeight in 5291, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5291)
      0.5 = coord(2/4)
    
    Abstract
    We use a probabilistic mixture decomposition method to determine topics in the Pennsylvania Gazette, a major colonial U.S. newspaper from 1728-1800. We assess the value of several topic decomposition techniques for historical research and compare the accuracy and efficacy of various methods. After determining the topics covered by the 80,000 articles and advertisements in the entire 18th century run of the Gazette, we calculate how the prevalence of those topics changed over time, and give historically relevant examples of our findings. This approach reveals important information about the content of this colonial newspaper, and suggests the value of such approaches to a more complete understanding of early American print culture and society.
    Date
    22. 7.2006 17:32:00
  13. Ferris, A.M.: If you buy it, will they use it? : a case study on the use of Classification web (2006) 0.11
    0.11368521 = product of:
      0.22737043 = sum of:
        0.16806103 = weight(_text_:assess in 88) [ClassicSimilarity], result of:
          0.16806103 = score(doc=88,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.45590025 = fieldWeight in 88, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.0546875 = fieldNorm(doc=88)
        0.059309397 = weight(_text_:22 in 88) [ClassicSimilarity], result of:
          0.059309397 = score(doc=88,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.2708308 = fieldWeight in 88, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=88)
      0.5 = coord(2/4)
    
    Abstract
    This paper presents a study conducted at the University of Colorado at Boulder (CU-Boulder) to assess the extent to which its catalogers were using Classification Web (Class Web), the subscription-based, online cataloging documentation resource provided by the Library of Congress. In addition, this paper will explore assumptions made by management regarding CU-Boulder catalogers' use of the product, possible reasons for the lower-than-expected use, and recommendations for promoting a more efficient and cost-effective use of Class Web at other institutions similar to CU-Boulder.
    Date
    10. 9.2000 17:38:22
  14. Dalip, D.H.; Gonçalves, M.A.; Cristo, M.; Calado, P.: ¬A general multiview framework for assessing the quality of collaboratively created content on web 2.0 (2017) 0.11
    0.106065564 = product of:
      0.21213113 = sum of:
        0.16976728 = weight(_text_:assess in 3343) [ClassicSimilarity], result of:
          0.16976728 = score(doc=3343,freq=4.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.4605288 = fieldWeight in 3343, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3343)
        0.042363856 = weight(_text_:22 in 3343) [ClassicSimilarity], result of:
          0.042363856 = score(doc=3343,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.19345059 = fieldWeight in 3343, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3343)
      0.5 = coord(2/4)
    
    Abstract
    User-generated content is one of the most interesting phenomena of current published media, as users are now able not only to consume, but also to produce content in a much faster and easier manner. However, such freedom also carries concerns about content quality. In this work, we propose an automatic framework to assess the quality of collaboratively generated content. Quality is addressed as a multidimensional concept, modeled as a combination of independent assessments, each regarding different quality dimensions. Accordingly, we adopt a machine-learning (ML)-based multiview approach to assess content quality. We perform a thorough analysis of our framework on two different domains: Question and Answer Forums and Collaborative Encyclopedias. This allowed us to better understand when and how the proposed multiview approach is able to provide accurate quality assessments. Our main contributions are: (a) a general ML multiview framework that takes advantage of different views of quality indicators; (b) the improvement (up to 30%) in quality assessment over the best state-of-the-art baseline methods; (c) a thorough feature and view analysis regarding impact, informativeness, and correlation, based on two distinct domains.
    Date
    16.11.2017 13:04:22
  15. Popper, K.R.: Three worlds : the Tanner lecture on human values. Deliverd at the University of Michigan, April 7, 1978 (1978) 0.10
    0.09932382 = product of:
      0.39729527 = sum of:
        0.39729527 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
          0.39729527 = score(doc=230,freq=2.0), product of:
            0.5301813 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.062536046 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
      0.25 = coord(1/4)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  16. Mood, T.A.: Of sundials and digital watches : a further step toward the new paradigm of reference (1994) 0.10
    0.09744447 = product of:
      0.19488893 = sum of:
        0.14405231 = weight(_text_:assess in 166) [ClassicSimilarity], result of:
          0.14405231 = score(doc=166,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.39077166 = fieldWeight in 166, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.046875 = fieldNorm(doc=166)
        0.050836623 = weight(_text_:22 in 166) [ClassicSimilarity], result of:
          0.050836623 = score(doc=166,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.23214069 = fieldWeight in 166, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=166)
      0.5 = coord(2/4)
    
    Abstract
    The new paradigm of reference, in which the reference librarian becomes a consultant more than a quick-answer specialist, needs to be stretched, Mood advocates. Rather than assisting people with their research, the reference librarian needs to do the research for them. After an interview to assess the user's needs, the librarian searches various print and nonprint access tools, then presents to the patron a bibliography of sources and - possibly - copies of articles and books. This new approach to reference is needed because of both the increasing complication of libraries, with their myriad computer access points to information, and the increasing number of patrons who want information but do not want to learn how to retrieve it. This change in library reference can be implemented with better signage, more prepackaging of information, and an increased knowledge of the local community's information needs
    Source
    Reference services review. 22(1994) no.3, S.27-32
  17. Chaudhry, A.S.; Ashoor, S.: Functional performance of automated systems : a comparative study of HORIZON, INNOPAC and VTLS (1998) 0.10
    0.09744447 = product of:
      0.19488893 = sum of:
        0.14405231 = weight(_text_:assess in 3022) [ClassicSimilarity], result of:
          0.14405231 = score(doc=3022,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.39077166 = fieldWeight in 3022, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.046875 = fieldNorm(doc=3022)
        0.050836623 = weight(_text_:22 in 3022) [ClassicSimilarity], result of:
          0.050836623 = score(doc=3022,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.23214069 = fieldWeight in 3022, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=3022)
      0.5 = coord(2/4)
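The score explanation above follows Lucene's ClassicSimilarity (TF-IDF) formula: each matching term contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = sqrt(freq) x idf x fieldNorm, and the sum is scaled by the coordination factor (here 2 of 4 query terms matched). A minimal Python sketch reproducing entry 17's displayed score; the constants are copied from the breakdown above, and the idf formula 1 + ln(numDocs / (docFreq + 1)) is ClassicSimilarity's default:

```python
import math

def term_weight(freq, idf, query_norm, field_norm):
    """One term's contribution under Lucene ClassicSimilarity:
    (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)."""
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

QUERY_NORM = 0.062536046   # from the breakdown above
FIELD_NORM = 0.046875      # length norm of the matched field

# idf = 1 + ln(numDocs / (docFreq + 1)); maxDocs=44218 as displayed
idf_assess = math.log(44218 / (330 + 1)) + 1    # term "assess"
idf_22     = math.log(44218 / (3622 + 1)) + 1   # term "22"

w_assess = term_weight(2.0, idf_assess, QUERY_NORM, FIELD_NORM)
w_22     = term_weight(2.0, idf_22, QUERY_NORM, FIELD_NORM)

# coord(2/4): only 2 of the 4 query terms matched this document
score = (w_assess + w_22) * (2 / 4)
print(round(score, 8))  # ~0.0974, matching the displayed score
```

The per-term weights come out as ~0.144 ("assess") and ~0.051 ("22"), matching the two `weight(_text_:...)` lines in the explanation.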
    
    Abstract
    Provides functional performance data drawn from an analysis of the capabilities and functionality of 3 major library automation systems: HORIZON, INNOPAC and the Virginia Tech Library System (VTLS). The assessment was based on vendor input as well as on feedback from libraries of different types from different parts of the world. Objective criteria based on a numerical scoring scheme were used to assess system performance in 6 major functional areas: acquisition; cataloguing; circulation; OPAC; reference and information services; and serials control. The functional performance data is expected to be useful for libraries looking for new systems as well as those already computerised and interested in enhancing their present systems. In addition, data on the extent of the utilisation of system capabilities by libraries should also be of interest to system vendors.
    Date
    22. 2.1999 14:03:24
  18. Wan, X.; Liu, F.: Are all literature citations equally important? : automatic citation strength estimation and its applications (2014) 0.10
    0.09744447 = product of:
      0.19488893 = sum of:
        0.14405231 = weight(_text_:assess in 1350) [ClassicSimilarity], result of:
          0.14405231 = score(doc=1350,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.39077166 = fieldWeight in 1350, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.046875 = fieldNorm(doc=1350)
        0.050836623 = weight(_text_:22 in 1350) [ClassicSimilarity], result of:
          0.050836623 = score(doc=1350,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.23214069 = fieldWeight in 1350, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=1350)
      0.5 = coord(2/4)
    
    Abstract
    Literature citation analysis plays a very important role in bibliometrics and scientometrics, for example in the Science Citation Index (SCI) impact factor and the h-index. Existing citation analysis methods assume that all citations in a paper are equally important, and they simply count the number of citations. Here we argue that the citations in a paper are not equally important and that some citations are more important than others. We use a strength value to assess the importance of each citation and propose to use a regression method with a few useful features for automatically estimating the strength value of each citation. Evaluation results on a manually labeled data set in the computer science field show that the estimated values achieve good correlation with human-labeled values. We further apply the estimated citation strength values to evaluating paper influence and author influence, and the preliminary evaluation results demonstrate the usefulness of the citation strength values.
    Date
    22. 8.2014 17:12:35
  19. Ding, Y.; Zhang, G.; Chambers, T.; Song, M.; Wang, X.; Zhai, C.: Content-based citation analysis : the next generation of citation analysis (2014) 0.10
    0.09744447 = product of:
      0.19488893 = sum of:
        0.14405231 = weight(_text_:assess in 1521) [ClassicSimilarity], result of:
          0.14405231 = score(doc=1521,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.39077166 = fieldWeight in 1521, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.046875 = fieldNorm(doc=1521)
        0.050836623 = weight(_text_:22 in 1521) [ClassicSimilarity], result of:
          0.050836623 = score(doc=1521,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.23214069 = fieldWeight in 1521, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=1521)
      0.5 = coord(2/4)
    
    Abstract
    Traditional citation analysis has been widely applied to detect patterns of scientific collaboration, map the landscapes of scholarly disciplines, assess the impact of research outputs, and observe knowledge transfer across domains. It is, however, limited, as it assumes all citations are of similar value and weights each equally. Content-based citation analysis (CCA) addresses a citation's value by interpreting each one based on its context at both the syntactic and semantic levels. This paper provides a comprehensive overview of CCA research in terms of its theoretical foundations, methodological approaches, and example applications. In addition, we highlight how increased computational capabilities and publicly available full-text resources have opened this area of research to vast possibilities, which enable deeper citation analysis, more accurate citation prediction, and increased knowledge discovery.
    Date
    22. 8.2014 16:52:04
  20. Thelwall, M.; Sud, P.: Mendeley readership counts : an investigation of temporal and disciplinary differences (2016) 0.10
    0.09744447 = product of:
      0.19488893 = sum of:
        0.14405231 = weight(_text_:assess in 3211) [ClassicSimilarity], result of:
          0.14405231 = score(doc=3211,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.39077166 = fieldWeight in 3211, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.046875 = fieldNorm(doc=3211)
        0.050836623 = weight(_text_:22 in 3211) [ClassicSimilarity], result of:
          0.050836623 = score(doc=3211,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.23214069 = fieldWeight in 3211, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=3211)
      0.5 = coord(2/4)
    
    Abstract
    Scientists and managers using citation-based indicators to help evaluate research cannot evaluate recent articles because of the time needed for citations to accrue. Reading occurs before citing, however, and so it makes sense to count readers rather than citations for recent publications. To assess this, Mendeley readers and citations were obtained for articles from 2004 to late 2014 in five broad categories (agriculture, business, decision science, pharmacy, and the social sciences) and 50 subcategories. In these areas, citation counts tended to increase with every extra year since publication, and readership counts tended to increase faster initially but then stabilize after about 5 years. The correlation between citations and readers was also higher for longer time periods, stabilizing after about 5 years. Although there were substantial differences between broad fields and smaller differences between subfields, the results confirm the value of Mendeley reader counts as early scientific impact indicators.
    Date
    16.11.2016 11:07:22

Types

  • a 2203
  • m 162
  • s 100
  • el 82
  • b 32
  • r 14
  • x 9
  • i 3
  • n 2
  • p 2
  • h 1
