Search (4 results, page 1 of 1)

  • author_ss:"Chowdhury, G."
  • language_ss:"e"
  • year_i:[2010 TO 2020}
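
  The active filters above are Solr field queries; the mixed brackets in year_i:[2010 TO 2020} are standard Solr range syntax (lower bound inclusive, upper bound exclusive, i.e. publication years 2010-2019). A minimal Python sketch of an equivalent request follows; the endpoint URL and core name are assumptions for illustration, not taken from this page.

      import requests

      params = {
          "q": "*:*",
          # The three active filters, sent as repeated fq parameters.
          # [2010 TO 2020} = from 2010 inclusive up to, but excluding, 2020.
          "fq": [
              'author_ss:"Chowdhury, G."',
              'language_ss:"e"',
              "year_i:[2010 TO 2020}",
          ],
          "wt": "json",
      }
      # Hypothetical local Solr core named "literature".
      response = requests.get("http://localhost:8983/solr/literature/select",
                              params=params)
      print(response.json()["response"]["numFound"])  # 4 for this search
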
  1. Syazillah, N.H.; Kiran, K.; Chowdhury, G.: Adaptation, translation, and validation of information literacy assessment instrument (2018) 0.05
    0.047131833 = product of:
      0.094263665 = sum of:
        0.005885557 = product of:
          0.023542227 = sum of:
            0.023542227 = weight(_text_:based in 4371) [ClassicSimilarity], result of:
              0.023542227 = score(doc=4371,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.16644597 = fieldWeight in 4371, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4371)
          0.25 = coord(1/4)
        0.08837811 = product of:
          0.17675622 = sum of:
            0.17675622 = weight(_text_:assessment in 4371) [ClassicSimilarity], result of:
              0.17675622 = score(doc=4371,freq=10.0), product of:
                0.25917634 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.04694356 = queryNorm
                0.6819921 = fieldWeight in 4371, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4371)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
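
    The indented breakdown above (and those for the following hits) is Lucene explain() output for the ClassicSimilarity tf-idf model: each leaf term weight is queryWeight (idf × queryNorm) times fieldWeight (tf × idf × fieldNorm), and the coord() lines scale each sum by the fraction of query clauses that matched. A minimal sketch using the standard ClassicSimilarity formulas, reproducing the "assessment" leaf above:

      import math

      def classic_term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
          """Leaf term weight as shown in Lucene ClassicSimilarity explain output."""
          tf = math.sqrt(freq)                           # tf(freq) = sqrt(freq)
          idf = 1.0 + math.log(max_docs / (doc_freq + 1))
          query_weight = idf * query_norm                # idf * queryNorm
          field_weight = tf * idf * field_norm           # tf * idf * fieldNorm
          return query_weight * field_weight             # note: idf enters squared

      # Values taken from the "assessment" leaf for doc 4371:
      w = classic_term_weight(freq=10.0, doc_freq=480, max_docs=44218,
                              query_norm=0.04694356, field_norm=0.0390625)
      print(w)  # ~0.17675622; coord(1/2) and coord(2/4) then scale the sums
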
    
    Abstract
    The assessment of information literacy (IL) at the school level is largely dependent on measurement tools developed in the Western world. These tools need to be carefully adapted, and in most cases translated, before they can be used in other cultures, languages, and countries. To date there have been no standard guidelines for adapting these tools; hence, results can be generalized across cultures only to a limited extent. Furthermore, most data analyses produce generic outcomes that take into account neither the ability of the students nor the difficulty of the test items. The present study proposes a systematic approach to the context adaptation and language translation of a preexisting IL assessment tool, TRAILS-9, so that it can be used in a different language and context, in this case a Malaysian public secondary school. The study then applies a less common psychometric approach, Rasch analysis, to validate the adapted instrument. This technique produces a hierarchy of item difficulty within the assessment domain, enabling students' ability levels to be differentiated on the basis of item difficulty. The recommended scale adaptation guidelines can reduce the misinterpretation of scores from instruments in multiple languages and contribute to the parallel development of IL assessment among secondary school students from different populations.
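
    The Rasch analysis mentioned above places student ability and item difficulty on a common logit scale and models the probability of a correct answer as a function of their difference. A minimal sketch of the dichotomous Rasch model; the ability and difficulty values are hypothetical, not taken from the study:

      import math

      def rasch_probability(ability, difficulty):
          """P(correct) under the Rasch model: logistic in (ability - difficulty)."""
          return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

      # Hypothetical logit values for illustration:
      print(rasch_probability(ability=0.5, difficulty=-1.0))  # easy item, ~0.82
      print(rasch_probability(ability=0.5, difficulty=2.0))   # hard item, ~0.18
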
  2. Chowdhury, G.: An agenda for green information retrieval research (2012) 0.02
    0.024938494 = product of:
      0.049876988 = sum of:
        0.0047084456 = product of:
          0.018833783 = sum of:
            0.018833783 = weight(_text_:based in 2724) [ClassicSimilarity], result of:
              0.018833783 = score(doc=2724,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.13315678 = fieldWeight in 2724, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2724)
          0.25 = coord(1/4)
        0.04516854 = weight(_text_:term in 2724) [ClassicSimilarity], result of:
          0.04516854 = score(doc=2724,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.20621133 = fieldWeight in 2724, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.03125 = fieldNorm(doc=2724)
      0.5 = coord(2/4)
    
    Abstract
    Nowadays we use information retrieval systems and services as part of many day-to-day activities, ranging from web and database searches to searching digital libraries, audio and video collections, and so on. However, IR systems and services make extensive use of ICT (information and communication technologies), and increasing use of ICT can significantly increase greenhouse gas (GHG) emissions. Sustainable development, and environmental sustainability in particular, has become a major concern of various national and international bodies, and as a result various initiatives and measures are being proposed for reducing the environmental impact of industries, businesses, governments, and institutions. Research also shows that appropriate use of ICT can reduce the overall GHG emissions of a business, product, or service. Green IT and cloud computing can play a key role in reducing the environmental impact of ICT. This paper proposes the concept of Green IR systems and services, which can play a key role in reducing the overall environmental impact of the many ICT-based services in education and research, business, government, etc. that increasingly rely on access to and use of digital information. To date, however, there has been no systematic research towards building Green IR systems and services. This paper identifies the major challenges in building Green IR systems and services, and proposes two different methods for estimating the energy consumption, and the corresponding GHG emissions, of an IR system or service. It also proposes the four key enablers of a Green IR, viz. Standardize, Share, Reuse, and Green behavior. Directions for further research towards building Green IR systems and services are also outlined.
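
    The abstract does not spell out the two proposed estimation methods, so the sketch below only illustrates the generic arithmetic such estimates reduce to: an energy figure multiplied by an emission factor for the electricity supply. All names and numbers are hypothetical.

      def ghg_emissions_kg(energy_kwh, emission_factor_kg_per_kwh):
          """GHG emissions (kg CO2e) implied by an energy-consumption estimate."""
          return energy_kwh * emission_factor_kg_per_kwh

      # Hypothetical figures: an IR service drawing 1200 kWh per year on a
      # grid emitting 0.4 kg CO2e per kWh.
      print(ghg_emissions_kg(1200.0, 0.4))  # 480.0 kg CO2e per year
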
  3. Chowdhury, G.: Building environmentally sustainable information services : a green IS research agenda (2012) 0.00
    0.0029427784 = product of:
      0.011771114 = sum of:
        0.011771114 = product of:
          0.047084454 = sum of:
            0.047084454 = weight(_text_:based in 42) [ClassicSimilarity], result of:
              0.047084454 = score(doc=42,freq=8.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.33289194 = fieldWeight in 42, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=42)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    Climate change has become a major area of concern over the past few years, and consequently many governments, international bodies, businesses, and institutions are taking measures to reduce their carbon footprint. However, to date very little research has been conducted on information and sustainable development in general, and on the environmental impact of information services in particular. Based on data collected from various research papers and reports, this review article shows that information systems and services for the higher education and research sector currently generate massive greenhouse gas (GHG) emissions, and it argues that there is an urgent need to develop a green information service, or green IS for short, designed for minimum GHG emissions throughout its lifecycle, from content creation to distribution, access, use, and disposal. Based on an analysis of current research on green information technology (IT), it is proposed that a green IS should be built on the model of cloud computing. Finally, a research agenda is proposed that will pave the way for building and managing green ISs to support education and research/scholarly activities.
  4. Nguyen, S.-H.; Chowdhury, G.: Interpreting the knowledge map of digital library research (1990-2010) (2013) 0.00
    0.0017656671 = product of:
      0.0070626684 = sum of:
        0.0070626684 = product of:
          0.028250674 = sum of:
            0.028250674 = weight(_text_:based in 958) [ClassicSimilarity], result of:
              0.028250674 = score(doc=958,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.19973516 = fieldWeight in 958, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.046875 = fieldNorm(doc=958)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    A knowledge map of digital library (DL) research shows the semantic organization of DL research topics as well as the evolution of the field. The research reported in this article aims to identify the core topics and subtopics of DL research in order to build a knowledge map of the DL domain. The methodology comprises a four-step research process and uses two knowledge organization methods (classification and thesaurus building). A knowledge map covering 21 core topics and 1,015 subtopics of DL research was created, providing a systematic overview of DL research over the last two decades (1990-2010). We argue that the map can serve as a knowledge platform to guide, evaluate, and improve the activities of DL research, education, and practice. Moreover, it can be transformed into a DL ontology for various applications. The research methodology can be applied to map any domain of human knowledge; it is a novel and scientific method for producing comprehensive and systematic knowledge maps based on literary warrant.
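
    The core-topic/subtopic organization described above is, structurally, a two-level hierarchy. A minimal sketch of such a structure; the topic names are invented placeholders, not the 21 core topics identified in the study:

      # Two-level knowledge map: core topic -> subtopics (placeholder names).
      knowledge_map = {
          "information retrieval": ["query languages", "relevance ranking"],
          "digital preservation": ["format migration", "emulation"],
      }

      def core_topic_of(subtopic):
          """Return the core topic a subtopic is classified under, if any."""
          for topic, subtopics in knowledge_map.items():
              if subtopic in subtopics:
                  return topic
          return None

      print(core_topic_of("relevance ranking"))  # information retrieval
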