Search (792 results, page 1 of 40)

  • Active filter: year_i:[2010 TO 2020}
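The active facet uses Lucene's range syntax, in which a square bracket marks an inclusive bound and a curly brace an exclusive one, so `year_i:[2010 TO 2020}` matches the years 2010 through 2019. A minimal sketch interpreting that syntax (the `parse_range` helper is hypothetical, not part of the search engine):

```python
import re

def parse_range(expr):
    """Parse a Lucene-style range filter such as 'year_i:[2010 TO 2020}'.
    '[' / ']' mean inclusive bounds; '{' / '}' mean exclusive bounds."""
    m = re.fullmatch(r"(\w+):([\[{])(\S+) TO (\S+)([\]}])", expr)
    if m is None:
        raise ValueError(f"not a range expression: {expr!r}")
    field, lo_bracket, lo, hi, hi_bracket = m.groups()
    return {
        "field": field,
        "min": int(lo), "min_inclusive": lo_bracket == "[",
        "max": int(hi), "max_inclusive": hi_bracket == "]",
    }

f = parse_range("year_i:[2010 TO 2020}")
# lower bound 2010 inclusive, upper bound 2020 exclusive -> years 2010..2019
```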
  1. Jiang, Z.; Gu, Q.; Yin, Y.; Wang, J.; Chen, D.: GRAW+ : a two-view graph propagation method with word coupling for readability assessment (2019) 0.09
    0.09119919 = product of:
      0.18239838 = sum of:
        0.18239838 = sum of:
          0.14801835 = weight(_text_:assessment in 5218) [ClassicSimilarity], result of:
            0.14801835 = score(doc=5218,freq=6.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.5282689 = fieldWeight in 5218, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5218)
          0.03438003 = weight(_text_:22 in 5218) [ClassicSimilarity], result of:
            0.03438003 = score(doc=5218,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.19345059 = fieldWeight in 5218, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5218)
      0.5 = coord(1/2)
    
    Abstract
    Existing methods for readability assessment usually construct inductive classification models to assess the readability of individual text documents based on extracted features, an approach that has been demonstrated to be effective. However, these methods rarely make use of the interrelationships among documents on readability, which can help increase the accuracy of readability assessment. In this article, we adopt a graph-based classification method to model and utilize the relationships among documents using the coupled bag-of-words model. We propose a word coupling method to build the coupled bag-of-words model by estimating the correlation between words with respect to reading difficulty. In addition, we propose a two-view graph propagation method to make use of both the coupled bag-of-words model and the linguistic features. Our method employs a graph merging operation to combine graphs built according to different views, and improves label propagation by incorporating the ordinal relation among reading levels. Experiments were conducted on both English and Chinese data sets, and the results demonstrate both the effectiveness and the potential of the method.
    Date
    15. 4.2019 13:46:22
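The score breakdown shown under each hit is Lucene's ClassicSimilarity "explain" output. As a minimal sketch, the arithmetic of the tree above can be re-derived from the classic TF-IDF formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)); the queryNorm, fieldNorm, and coord constants below are copied verbatim from the tree, assuming this is what the engine used):

```python
import math

def tf(freq):                        # term-frequency factor
    return math.sqrt(freq)

def idf(doc_freq, max_docs):         # inverse document frequency
    return 1.0 + math.log(max_docs / (doc_freq + 1))

QUERY_NORM = 0.050750602             # queryNorm from the explain tree
FIELD_NORM = 0.0390625               # fieldNorm(doc=5218)

def term_score(freq, doc_freq, max_docs=44218):
    i = idf(doc_freq, max_docs)
    query_weight = i * QUERY_NORM                # idf * queryNorm
    field_weight = tf(freq) * i * FIELD_NORM     # tf * idf * fieldNorm
    return query_weight * field_weight

# _text_:assessment (freq=6, docFreq=480) + _text_:22 (freq=2, docFreq=3622)
total = term_score(6.0, 480) + term_score(2.0, 3622)
score = total * 0.5                  # coord(1/2): one of two query clauses matched
print(round(score, 8))               # ~0.09119919, the score shown for hit 1
```

The same recipe reproduces the scores of the other hits, since only freq, docFreq, and fieldNorm change between entries.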
  2. Stalberg, E.; Cronin, C.: Assessing the cost and value of bibliographic control (2011) 0.08
    Abstract
    In June 2009, the Association for Library Collections and Technical Services Heads of Technical Services in Large Research Libraries Interest Group established the Task Force on Cost/Value Assessment of Bibliographic Control to address recommendation 5.1.1.1 of On the Record: Report of the Library of Congress Working Group on the Future of Bibliographic Control, which focused on developing measures for costs, benefits, and value of bibliographic control. This paper outlines results of that task force's efforts to develop and articulate metrics for evaluating the cost and value of cataloging activities specifically, and offers some next steps that the community could take to further the profession's collective understanding of the costs and values associated with bibliographic control.
    Date
    10. 9.2000 17:38:22
  3. Mugridge, R.L.; Edmunds, J.: Batchloading MARC bibliographic records (2012) 0.08
    Abstract
    Research libraries are using batchloading to provide access to many resources that they would otherwise be unable to catalog given the staff and other resources available. To explore how such libraries are managing their batchloading activities, the authors conducted a survey of the Association for Library Collections and Technical Services Directors of Large Research Libraries Interest Group member libraries. The survey addressed staffing, budgets, scope, workflow, management, quality standards, information technology support, collaborative efforts, and assessment of batchloading activities. The authors provide an analysis of the survey results along with suggestions for process improvements and future research.
    Date
    10. 9.2000 17:38:22
  4. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.08
    Content
    Cf.: http://creativechoice.org/doc/HansJonas.pdf
  5. Pal, S.; Mitra, M.; Kamps, J.: Evaluation effort, reliability and reusability in XML retrieval (2011) 0.08
    Abstract
    The Initiative for the Evaluation of XML retrieval (INEX) provides a TREC-like platform for evaluating content-oriented XML retrieval systems. Since 2007, INEX has been using a set of precision-recall based metrics for its ad hoc tasks. The authors investigate the reliability and robustness of these focused retrieval measures, and of the INEX pooling method. They explore four specific questions: How reliable are the metrics when assessments are incomplete, or when query sets are small? What is the minimum pool/query-set size that can be used to reliably evaluate systems? Can the INEX collections be used to fairly evaluate "new" systems that did not participate in the pooling process? And, for a fixed amount of assessment effort, would this effort be better spent in thoroughly judging a few queries, or in judging many queries relatively superficially? The authors' findings validate properties of precision-recall-based metrics observed in document retrieval settings. Early precision measures are found to be more error-prone and less stable under incomplete judgments and small topic-set sizes. They also find that system rankings remain largely unaffected even when assessment effort is substantially (but systematically) reduced, and confirm that the INEX collections remain usable when evaluating nonparticipating systems. Finally, they observe that for a fixed amount of effort, judging shallow pools for many queries is better than judging deep pools for a smaller set of queries. However, when judging only a random sample of a pool, it is better to completely judge fewer topics than to partially judge many topics. This result confirms the effectiveness of pooling methods.
    Date
    22. 1.2011 14:20:56
  6. Corts Mendes, L.; Pacini de Moura, A.: Documentation as knowledge organization : an assessment of Paul Otlet's proposals (2014) 0.08
    Abstract
    This paper proposes an assessment of Paul Otlet's Documentation anchored in Birger Hjørland's argument that the field of Knowledge Organization (KO) must be formed by two interdependent views: a broad conception of how knowledge is socially and intellectually produced and organized, and a narrow view that deals with the organization of the documents that register knowledge. Otlet's conceptions of individual and collective knowledge are addressed, as well as the role of documents in its conservation and communication, in order to show how the intended universal application of Documentation's principles and methods was supposed to make registered knowledge easily accessible and clearly apprehended as a unified whole. It concludes that Otlet's Documentation fulfils, in its own context, Hjørland's requirement that narrow conceptions in the KO field be sustained by broader views of the organization of knowledge, and that it therefore qualifies as a historical component of KO, capable of contributing as such to its epistemological and theoretical discussions.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  7. Devaul, H.; Diekema, A.R.; Ostwald, J.: Computer-assisted assignment of educational standards using natural language processing (2011) 0.07
    Abstract
    Educational standards are a central focus of the current educational system in the United States, underpinning educational practice, curriculum design, teacher professional development, and high-stakes testing and assessment. Digital library users have requested that this information be accessible in association with digital learning resources to support teaching and learning as well as accountability requirements. Providing this information is complex because of the variability and number of standards documents in use at the national, state, and local level. This article describes a cataloging tool that aids catalogers in the assignment of standards metadata to digital library resources, using natural language processing techniques. The research explores whether the standards suggestor service would suggest the same standards as a human, whether relevant standards are ranked appropriately in the result set, and whether the relevance of the suggested assignments improve when, in addition to resource content, metadata is included in the query to the cataloging tool. The article also discusses how this service might streamline the cataloging workflow.
    Date
    22. 1.2011 14:25:32
  8. Didegah, F.; Thelwall, M.: Co-saved, co-tweeted, and co-cited networks (2018) 0.07
    Abstract
    Counts of tweets and Mendeley user libraries have been proposed as altmetric alternatives to citation counts for the impact assessment of articles. Although both have been investigated to discover whether they correlate with article citations, it is not known whether users tend to tweet or save (in Mendeley) the same kinds of articles that they cite. In response, this article compares pairs of articles that are tweeted, saved to a Mendeley library, or cited by the same user, but possibly a different user for each source. The study analyzes 1,131,318 articles published in 2012, with minimum tweeted (10), saved to Mendeley (100), and cited (10) thresholds. The results show surprisingly minor overall overlaps between the three phenomena. The importance of journals for Twitter and the presence of many bots at different levels of activity suggest that this site has little value for impact altmetrics. The moderate differences between patterns of saving and citation suggest that Mendeley can be used for some types of impact assessments, but sensitivity is needed for underlying differences.
    Date
    28. 7.2018 10:00:22
  9. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.07
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
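The Source link above is a Google redirect in which the actual target is percent-encoded inside the `url` query parameter. A minimal sketch recovering the direct link with the Python standard library (the redirect string is embedded verbatim from the entry above):

```python
from urllib.parse import urlparse, parse_qs

raw = ("http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5"
       "&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de"
       "%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI"
       "&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ"
       "&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja")

# parse_qs percent-decodes each parameter value for us
params = parse_qs(urlparse(raw).query)
target = params["url"][0]
print(target)  # http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
```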
  10. Moed, H.F.; Halevi, G.: Multidimensional assessment of scholarly research impact (2015) 0.06
    Abstract
    This article introduces the Multidimensional Research Assessment Matrix of scientific output. Its base notion holds that the choice of metrics to be applied in a research assessment process depends on the unit of assessment, the research dimension to be assessed, and the purposes and policy context of the assessment. An indicator may be highly useful within one assessment process, but less so in another. For instance, publication counts are useful tools to help discriminate between those staff members who are research active, and those who are not, but are of little value if active scientists are to be compared with one another according to their research performance. This paper gives a systematic account of the potential usefulness and limitations of a set of 10 important metrics, including altmetrics, applied at the level of individual articles, individual researchers, research groups, and institutions. It presents a typology of research impact dimensions and indicates which metrics are the most appropriate to measure each dimension. It introduces the concept of a "meta-analysis" of the units under assessment in which metrics are not used as tools to evaluate individual units, but to reach policy inferences regarding the objectives and general setup of an assessment process.
  11. Nathan, L.P.: Sustainable information practice : an ethnographic investigation (2012) 0.06
    Abstract
    This project develops the concept of sustainable information practice within the field of information science. The inquiry is grounded by data from a study of 2 ecovillages, intentional communities striving to ground their daily activities in a set of core values related to sustainability. Ethnographic methods employed for over 2 years resulted in data from hundreds of hours of participant observation, semistructured interviews with 22 community members, and a diverse collection of community images and texts. Analysis of the data highlights the tensions that arose and remained as community members experienced breakdowns between community values related to sustainability and their daily information practices. Contributions to the field of information science include the development of the concept of sustainable information practice, an analysis of why community members felt unable to adapt their information practices to better match community concepts of sustainability, and an assessment of the methodological challenges of information practice inquiry within a communal, nonwork environment. Most broadly, this work contributes to our larger understanding of the challenges faced by those attempting to identify and develop more sustainable information practices. In addition, findings from this investigation call into question previous claims that groups of individuals with strong value commitments can adapt their use of information tools to better support their values. In contrast, this work suggests that information practices can be particularly resilient to local, value-based adaptation.
  12. Chew, S.W.; Khoo, K.S.G.: Comparison of drug information on consumer drug review sites versus authoritative health information websites (2016) 0.06
    Abstract
    Large amounts of health-related information of different types are available on the web. In addition to authoritative health information sites maintained by government health departments and healthcare institutions, there are many social media sites carrying user-contributed information. This study sought to identify the types of drug information available on consumer-contributed drug review sites when compared with authoritative drug information websites. Content analysis was performed on the information available for nine drugs on three authoritative sites (RxList, eMC, and PDRhealth) as well as three drug review sites (WebMD, RateADrug, and PatientsLikeMe). The types of information found on authoritative sites but rarely on drug review sites include pharmacology, special population considerations, contraindications, and drug interactions. Types of information found only on drug review sites include drug efficacy, drug resistance experienced by long-term users, cost of drug in relation to insurance coverage, availability of generic forms, comparison with other similar drugs and with other versions of the drug, difficulty in using the drug, and advice on coping with side effects. Drug efficacy ratings by users were found to be different across the three sites. Side effects were vividly described in context, with user assessment of severity based on discomfort and effect on their lives.
    Date
    22. 1.2016 12:24:05
  13. Dalip, D.H.; Gonçalves, M.A.; Cristo, M.; Calado, P.: ¬A general multiview framework for assessing the quality of collaboratively created content on web 2.0 (2017) 0.06
    Abstract
    User-generated content is one of the most interesting phenomena of current published media, as users are now able not only to consume, but also to produce content in a much faster and easier manner. However, such freedom also carries concerns about content quality. In this work, we propose an automatic framework to assess the quality of collaboratively generated content. Quality is addressed as a multidimensional concept, modeled as a combination of independent assessments, each regarding different quality dimensions. Accordingly, we adopt a machine-learning (ML)-based multiview approach to assess content quality. We perform a thorough analysis of our framework on two different domains: Questions and Answer Forums and Collaborative Encyclopedias. This allowed us to better understand when and how the proposed multiview approach is able to provide accurate quality assessments. Our main contributions are: (a) a general ML multiview framework that takes advantage of different views of quality indicators; (b) the improvement (up to 30%) in quality assessment over the best state-of-the-art baseline methods; (c) a thorough feature and view analysis regarding impact, informativeness, and correlation, based on two distinct domains.
    Date
    16.11.2017 13:04:22
  14. Thompson, S.; Reilly, M.: ¬"A picture is worth a thousand words" : reverse image lookup and digital library assessment (2017) 0.06
    Abstract
    This brief communication builds on the application of content-based image retrieval (CBIR) and reverse image lookup (RIL), a graduated form of CBIR, as assessment tools for digital library image reuse. It combines literature on the definition, history, usefulness, and limitations of RIL and includes a brief analysis of the 4 published digital library image reuse assessment case studies. In its conclusion, the communication paper proposes that RIL offers benefits for digital library managers in the assessment of their collections.
  15. Haustein, S.; Sugimoto, C.; Larivière, V.: Social media in scholarly communication : Guest editorial (2015) 0.05
    0.05471951 = product of:
      0.10943902 = sum of:
        0.10943902 = sum of:
          0.088811 = weight(_text_:assessment in 3809) [ClassicSimilarity], result of:
            0.088811 = score(doc=3809,freq=6.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.31696132 = fieldWeight in 3809, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.0234375 = fieldNorm(doc=3809)
          0.020628018 = weight(_text_:22 in 3809) [ClassicSimilarity], result of:
            0.020628018 = score(doc=3809,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.116070345 = fieldWeight in 3809, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=3809)
      0.5 = coord(1/2)
    
    Abstract
    Furthermore, the rise of the web, and subsequently the social web, has challenged the quasi-monopolistic status of the journal as the main form of scholarly communication and of citation indices as the primary assessment mechanisms. Scientific communication is becoming more open, transparent, and diverse: publications are increasingly open access; manuscripts, presentations, code, and data are shared online; research ideas and results are discussed and criticized openly on blogs; and new peer review experiments, with open post-publication assessment by anonymous or non-anonymous referees, are underway. The diversification of scholarly production and assessment, paired with the increasing speed of the communication process, leads to increased information overload (Bawden and Robinson, 2008), demanding new filters. The concept of altmetrics, short for alternative (to citation) metrics, was created out of an attempt to provide such a filter (Priem et al., 2010) and to steer against the oversimplification of measuring scientific success solely by the number of journal articles published and citations received, by considering a wider range of research outputs and metrics (Piwowar, 2013). Although the term altmetrics was introduced in a tweet in 2010 (Priem, 2010), the idea of capturing traces - "polymorphous mentioning" (Cronin et al., 1998, p. 1320) - of scholars and their documents on the web to measure the "impact" of science more broadly than citations do was introduced years before, largely in the context of webometrics (Almind and Ingwersen, 1997; Thelwall et al., 2005):
    Date
    20. 1.2015 18:30:22
  16. Maemura, E.; Moles, N.; Becker, C.: Organizational assessment frameworks for digital preservation : a literature review and mapping (2017) 0.05
    0.05233238 = product of:
      0.10466476 = sum of:
        0.10466476 = product of:
          0.20932952 = sum of:
            0.20932952 = weight(_text_:assessment in 3743) [ClassicSimilarity], result of:
              0.20932952 = score(doc=3743,freq=12.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.7470849 = fieldWeight in 3743, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3743)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    As the field of digital preservation (DP) matures, there is an increasing need to systematically assess an organization's abilities to achieve its digital preservation goals, and a wide variety of assessment tools have been created for this purpose. This article aims to map the landscape of research in this area, evaluate the current maturity of knowledge on this central question in DP, and provide direction for future research. To do so, this paper reviews assessment frameworks in digital preservation through a systematic literature search and categorizes the literature by type of research. The analysis shows that publication output around assessment in digital preservation has increased markedly over time, but most existing work focuses on developing new models rather than rigorous evaluation and validation of existing frameworks. Significant gaps are present in the application of robust conceptual foundations and design methods, and in the level of empirical evidence available to enable the evaluation and validation of assessment models. The analysis and comparison with other fields suggest that the design of assessment models in DP should be studied rigorously in both theory and practice, and that the development of future models will benefit from applying existing methods, processes, and principles for model design.
  17. Choi, W.; Stvilia, B.: Web credibility assessment : conceptualization, operationalization, variability, and models (2015) 0.05
    0.05180642 = product of:
      0.10361284 = sum of:
        0.10361284 = product of:
          0.20722568 = sum of:
            0.20722568 = weight(_text_:assessment in 2469) [ClassicSimilarity], result of:
              0.20722568 = score(doc=2469,freq=6.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.7395764 = fieldWeight in 2469, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2469)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article reviews theoretical and empirical studies on information credibility, with particular attention to how scholars have conceptualized credibility, which is known as a multifaceted concept with underlying dimensions; how credibility has been operationalized and measured in empirical studies, especially in the web context; which user characteristics contribute to the variability of web credibility assessment; and how the process of web credibility assessment has been theorized. An agenda for future research on information credibility is also discussed.
  18. Jaric, I.: ¬The use of h-index for the assessment of journals' performance will lead to shifts in editorial policies (2011) 0.05
    0.05127505 = product of:
      0.1025501 = sum of:
        0.1025501 = product of:
          0.2051002 = sum of:
            0.2051002 = weight(_text_:assessment in 4949) [ClassicSimilarity], result of:
              0.2051002 = score(doc=4949,freq=2.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.7319907 = fieldWeight in 4949, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4949)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  19. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.05
    0.04793538 = product of:
      0.09587076 = sum of:
        0.09587076 = sum of:
          0.068366736 = weight(_text_:assessment in 1634) [ClassicSimilarity], result of:
            0.068366736 = score(doc=1634,freq=2.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.2439969 = fieldWeight in 1634, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.03125 = fieldNorm(doc=1634)
          0.027504025 = weight(_text_:22 in 1634) [ClassicSimilarity], result of:
            0.027504025 = score(doc=1634,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.15476047 = fieldWeight in 1634, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1634)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - Ontologies are prone to wide semantic variability due to subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains through their relations. Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques have been proposed over the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. Evidently, matching individual entities cannot result in full integration of ontologies' semantics without matching their inter-relations with all other related classes (and instances). However, semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessment of maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules, and lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can also be facilitated to reveal the contradictory relations in different ontologies. Findings - To assess the feasibility of the approach, two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects.
The results for about 50 distinct ontology pairs demonstrate the good potential of the methodology for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies. Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules, along with initial experimental evidence of its feasibility and effectiveness. However, this methodology has to be fully automatically implemented and tested on a larger dataset in future research. Practical implications - This result has implications for semantic search, since a richer ontology, comprised of multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results. Originality/value - To the best of our knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
    Date
    20. 1.2015 18:30:22
  20. Hackett, P.M.W.: Facet theory and the mapping sentence : evolving philosophy, use and application (2014) 0.05
    0.04793538 = product of:
      0.09587076 = sum of:
        0.09587076 = sum of:
          0.068366736 = weight(_text_:assessment in 2258) [ClassicSimilarity], result of:
            0.068366736 = score(doc=2258,freq=2.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.2439969 = fieldWeight in 2258, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.03125 = fieldNorm(doc=2258)
          0.027504025 = weight(_text_:22 in 2258) [ClassicSimilarity], result of:
            0.027504025 = score(doc=2258,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.15476047 = fieldWeight in 2258, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2258)
      0.5 = coord(1/2)
    
    Content
    1 Introduction; 2 Ontological Categorisation and Mereology; Human assessment; Categories and the properties of experiential events; Mathematical, computing, artificial intelligence and library classification approaches; Sociological approaches; Psychological approaches; Personal Construct Theory; Philosophical approaches to categories; Mereology: facet theory and relationships between categories; Neuroscience and categories; Conclusions; 3 Facet Theory and Thinking about Human Behaviour; Generating knowledge in facet theory: a brief overview; What is facet theory?; Facets and facet elements; The mapping sentence; Designing a mapping sentence; Narrative; Roles that facets play; Single-facet structures: axial role and modular role; Polar role; Circumplex; Two-facet structures; Radex; Three-facet structures; Cylindrex; Analysing facet theory research; Conclusions; 4 Evolving Facet Theory Applications; The evolution of facet theory; Mapping a domain: the mapping sentence as a stand-alone approach and integrative tool; Making and understanding fine art; Defining the grid: a mapping sentence for grid images; Facet sort-technique; Facet mapping therapy: using the mapping sentence and the facet structures to explore client issues; Research program coordination; Conclusions and Future Directions; Glossary of Terms; Bibliography; Index
    Date
    17.10.2015 17:22:01
