Search (687 results, page 2 of 35)

  • Active filter: year_i:[2010 TO 2020}
  1. Anguiano Peña, G.; Naumis Peña, C.: Method for selecting specialized terms from a general language corpus (2015) 0.03
    0.033471715 = product of:
      0.13388686 = sum of:
        0.13388686 = weight(_text_:assisted in 2196) [ClassicSimilarity], result of:
          0.13388686 = score(doc=2196,freq=2.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.44781366 = fieldWeight in 2196, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.046875 = fieldNorm(doc=2196)
      0.25 = coord(1/4)
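
    The score breakdown attached to each hit is Lucene's "explain" output for its classic TF-IDF similarity (ClassicSimilarity). As a minimal sketch of how the reported numbers fit together, the factors from the tree above can be recombined in plain Python; queryNorm is copied from the output, since deriving it would require the full query. The same formula underlies every score on this page.

```python
import math

# Factors copied from the explanation tree above (doc 2196, query term "assisted").
tf = math.sqrt(2.0)                    # ~1.4142135 = tf(freq=2.0)
idf = 1 + math.log(44218 / (139 + 1))  # ~6.7552447 = idf(docFreq=139, maxDocs=44218)
query_norm = 0.04425879                # queryNorm, taken as given from the output
field_norm = 0.046875                  # fieldNorm(doc=2196)
coord = 1 / 4                          # coord(1/4): 1 of 4 query clauses matched

query_weight = idf * query_norm        # ~0.29897895
field_weight = tf * idf * field_norm   # ~0.44781366
print(coord * query_weight * field_weight)  # ~0.033471715, the reported score
```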
    
    Abstract
    Among the many aspects studied by library and information science are linguistic phenomena associated with document content analysis, for purposes of both information organization and retrieval. To this end, terms used in scientific and technical language must be recovered and their domains and behavior studied. Through language, society controls the knowledge available to people. Document content analysis, in this case of scientific texts, facilitates gathering knowledge of lexical units and their major applications and separating such specialized terms from the general language in order to create indexing languages. The model presented here, like other lexicographic resources with similar characteristics, may prove useful in the near future for computer-assisted indexing or as a corpus monitor for new text analyses or specialized corpora. Thus, using the techniques proposed herein for document content analysis of a lexicographically labeled general-language corpus, components that enable the extraction of lexical units from specialized language can be obtained and characterized.
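
    A minimal sketch of the underlying idea, not the authors' actual model: contrast a term's relative frequency in a specialized corpus against a general-language corpus and keep terms whose ratio clears a threshold. Function names, the smoothing, and the threshold are illustrative assumptions.

```python
from collections import Counter

def specialized_terms(specialized_tokens, general_tokens, min_ratio=5.0):
    """Rank terms by how much more frequent they are in the specialized
    corpus than in the general-language corpus (a frequency-ratio sketch)."""
    spec, gen = Counter(specialized_tokens), Counter(general_tokens)
    n_spec, n_gen = sum(spec.values()), sum(gen.values())
    scores = {}
    for term, freq in spec.items():
        rel_spec = freq / n_spec
        rel_gen = (gen[term] + 1) / (n_gen + 1)   # add-one smoothing for unseen terms
        ratio = rel_spec / rel_gen
        if ratio >= min_ratio:
            scores[term] = ratio
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```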
  2. Abdi, A.; Idris, N.; Alguliev, R.M.; Aliguliyev, R.M.: Automatic summarization assessment through a combination of semantic and syntactic information for intelligent educational systems (2015) 0.03
    0.033471715 = product of:
      0.13388686 = sum of:
        0.13388686 = weight(_text_:assisted in 2681) [ClassicSimilarity], result of:
          0.13388686 = score(doc=2681,freq=2.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.44781366 = fieldWeight in 2681, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.046875 = fieldNorm(doc=2681)
      0.25 = coord(1/4)
    
    Abstract
    Summary writing is a process for creating a short version of a source text. It can be used as a measure of understanding. Because grading students' summaries is a very time-consuming task, computer-assisted assessment can help teachers perform the grading more effectively. Several techniques, such as BLEU, ROUGE, N-gram co-occurrence, Latent Semantic Analysis (LSA), LSA_Ngram and LSA_ERB, have been proposed to support the automatic assessment of students' summaries. Since these techniques are more suitable for long texts, their performance is not satisfactory for the evaluation of short summaries. This paper proposes a specialized method that works well in assessing short summaries. Our proposed method integrates the semantic relations between words and their syntactic composition. As a result, it achieves high accuracy and improves on the performance of current techniques; experiments show that it is preferable to the existing approaches. A summary evaluation system based on the proposed method has also been developed.
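
    As an illustration of the general recipe (combining a semantic and a syntactic signal into one score), the sketch below uses deliberately simple stand-ins: cosine similarity over term-frequency vectors for the semantic part and bigram-overlap Jaccard for the syntactic part, mixed by a weight alpha. These are assumptions for illustration, not the paper's actual measures.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    num = sum(a[t] * b[t] for t in a if t in b)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def bigrams(tokens):
    return set(zip(tokens, tokens[1:]))

def summary_score(student_tokens, model_tokens, alpha=0.6):
    """Weighted mix of a 'semantic' cosine score and a 'syntactic'
    bigram-overlap (Jaccard) score; alpha is an illustrative weight."""
    sem = cosine(Counter(student_tokens), Counter(model_tokens))
    s_bi, m_bi = bigrams(student_tokens), bigrams(model_tokens)
    syn = len(s_bi & m_bi) / len(s_bi | m_bi) if (s_bi | m_bi) else 0.0
    return alpha * sem + (1 - alpha) * syn
```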
  3. Mitchell, J.S.; Panzer, M.: Dewey linked data : Making connections with old friends and new acquaintances (2012) 0.03
    0.027893096 = product of:
      0.111572385 = sum of:
        0.111572385 = weight(_text_:assisted in 305) [ClassicSimilarity], result of:
          0.111572385 = score(doc=305,freq=2.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.37317806 = fieldWeight in 305, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.0390625 = fieldNorm(doc=305)
      0.25 = coord(1/4)
    
    Abstract
    This paper explores the history, use cases, and future plans associated with the availability of the Dewey Decimal Classification (DDC) system as linked data. Parts of the DDC have been available as linked data since 2009. Initial efforts included the DDC Summaries (the top three levels of the DDC) in eleven languages exposed as linked data in dewey.info. In 2010, the content of dewey.info was further extended by the addition of assignable numbers and captions from the Abridged Edition 14 data files in English, Italian, and Vietnamese. During 2012, we will add assignable numbers and captions from the latest full edition database, DDC 23. In addition to the "old friends" of different Dewey language versions, institutions such as the British Library and Deutsche Nationalbibliothek have made use of Dewey linked data in bibliographic records and authority files, and AGROVOC has linked to our data at a general level. We expect to extend our linked data network shortly to "new acquaintances" such as GeoNames, ISO 639-3 language codes, and the Mathematics Subject Classification. In particular, we will examine the linking process to GeoNames as an example of cross-domain vocabulary alignment. In addition to linking plans, we report on use cases that facilitate machine-assisted categorization and support discovery in the Semantic Web environment.
  4. Kempf, A.O.; Ritze, D.; Eckert, K.; Zapilko, B.: New ways of mapping knowledge organization systems : using a semi-automatic matching procedure for building up vocabulary crosswalks (2014) 0.03
    0.027893096 = product of:
      0.111572385 = sum of:
        0.111572385 = weight(_text_:assisted in 1371) [ClassicSimilarity], result of:
          0.111572385 = score(doc=1371,freq=2.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.37317806 = fieldWeight in 1371, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1371)
      0.25 = coord(1/4)
    
    Abstract
    Crosswalks between different vocabularies are an indispensable prerequisite for integrated, high-quality search scenarios in distributed data environments where more than one controlled vocabulary is in use. Offered through the web and linked with each other, they act as a central link so that users can move back and forth between different online data sources. In the past, crosswalks between different thesauri have usually been developed manually, and in the long run the intellectual updating of such crosswalks is expensive. An obvious solution is to apply automatic matching procedures, such as so-called ontology matching tools. On the basis of computer-generated correspondences between the Thesaurus for the Social Sciences (TSS) and the Thesaurus for Economics (STW), our contribution explores the trade-off between IT-assisted tools and procedures on the one hand and external quality evaluation by domain experts on the other. This paper presents techniques for the semi-automatic development and maintenance of vocabulary crosswalks. The performance of multiple matching tools was first evaluated against a reference set of correct mappings; the tools were then used to generate new mappings. We conclude that ontology matching tools can be used effectively to speed up the work of domain experts, and that, by optimizing the workflow, the method promises to facilitate the sustained updating of high-quality vocabulary crosswalks.
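
    The evaluation step described above (scoring tool-generated correspondences against a reference set of correct mappings) reduces to set-overlap precision, recall, and F1. A minimal sketch, with mappings represented as hypothetical concept-ID pairs:

```python
def evaluate_mappings(generated, reference):
    """Precision/recall/F1 of tool-generated concept correspondences
    against a reference set of correct mappings, each a (source, target) pair."""
    generated, reference = set(generated), set(reference)
    tp = len(generated & reference)                 # correct correspondences found
    precision = tp / len(generated) if generated else 0.0
    recall = tp / len(reference) if reference else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Usage with hypothetical IDs:
# evaluate_mappings({("tss:labour", "stw:labour")}, reference_set)
```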
  5. Li, Y.; Xu, S.; Luo, X.; Lin, S.: ¬A new algorithm for product image search based on salient edge characterization (2014) 0.03
    0.027893096 = product of:
      0.111572385 = sum of:
        0.111572385 = weight(_text_:assisted in 1552) [ClassicSimilarity], result of:
          0.111572385 = score(doc=1552,freq=2.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.37317806 = fieldWeight in 1552, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1552)
      0.25 = coord(1/4)
    
    Abstract
    Visually assisted product image search has gained increasing popularity because of its capability to greatly improve end users' e-commerce shopping experiences. Unlike general-purpose content-based image retrieval (CBIR) applications, the specific goal of product image search is to retrieve and rank relevant products from a large-scale product database to visually assist a user's online shopping experience. In this paper, we explore the problem of product image search through salient edge characterization and analysis, for which we propose a novel image search method coupled with an interactive user region-of-interest indication function. Given a product image, the proposed approach first extracts an edge map, from which contour curves are further extracted. We then segment the extracted contours into fragments according to the detected contour corners. After that, a set of salient edge elements is extracted from each product image. Based on salient edge element matching and similarity evaluation, the method derives a new pairwise image similarity estimate, which we then use to retrieve product images. To evaluate the performance of our algorithm, we conducted 120 sessions of querying experiments on a data set comprising around 13,000 product images collected from multiple real-world e-commerce websites. We compared the performance of the proposed method with that of a bag-of-words method (Philbin, Chum, Isard, Sivic, & Zisserman, 2008) and a Pyramid Histogram of Orientated Gradients (PHOG) method (Bosch, Zisserman, & Munoz, 2007). Experimental results demonstrate that the proposed method improves the performance of example-based product image retrieval.
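
    A hedged sketch of the front end of such a pipeline using OpenCV: edge map, then contour curves, then corner-approximated polylines as crude stand-ins for salient edge elements. The Canny thresholds and the epsilon parameter are illustrative assumptions, and the matching/similarity stage is omitted.

```python
import cv2  # OpenCV >= 4 (findContours returns two values)

def salient_edge_elements(image_path, eps=2.0):
    """Edge map -> contour curves -> corner-approximated polylines,
    a rough sketch of the extraction steps described above."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 100, 200)                 # edge map (thresholds assumed)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_NONE)
    elements = []
    for c in contours:
        poly = cv2.approxPolyDP(c, eps, False)       # polyline vertices ~ contour corners
        elements.append(poly.reshape(-1, 2))         # (N, 2) array of corner points
    return elements
```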
  6. Golub, K.; Soergel, D.; Buchanan, G.; Tudhope, D.; Lykke, M.; Hiom, D.: ¬A framework for evaluating automatic indexing or classification in the context of retrieval (2016) 0.03
    0.027893096 = product of:
      0.111572385 = sum of:
        0.111572385 = weight(_text_:assisted in 3311) [ClassicSimilarity], result of:
          0.111572385 = score(doc=3311,freq=2.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.37317806 = fieldWeight in 3311, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3311)
      0.25 = coord(1/4)
    
    Abstract
    Tools for automatic subject assignment help deal with scale and sustainability in creating and enriching metadata, establishing more connections across and between resources and enhancing consistency. Although some software vendors and experimental researchers claim the tools can replace manual subject indexing, hard scientific evidence of their performance in operating information environments is scarce. A major reason for this is that research is usually conducted in laboratory conditions, excluding the complexities of real-life systems and situations. The article reviews and discusses issues with existing evaluation approaches such as problems of aboutness and relevance assessments, implying the need to use more than a single "gold standard" method when evaluating indexing and retrieval, and proposes a comprehensive evaluation framework. The framework is informed by a systematic review of the literature on evaluation approaches: evaluating indexing quality directly through assessment by an evaluator or through comparison with a gold standard, evaluating the quality of computer-assisted indexing directly in the context of an indexing workflow, and evaluating indexing quality indirectly through analyzing retrieval performance.
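
    The indirect route (evaluating indexing quality via retrieval performance) is commonly operationalized with measures such as mean average precision over a query set, computed once for manually indexed and once for automatically indexed data. A minimal sketch under assumed input shapes:

```python
def average_precision(ranked_ids, relevant_ids):
    """Average precision for one query: mean of precision@k at each
    rank k where a relevant document appears."""
    hits, precisions = 0, []
    for k, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant_ids:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant_ids) if relevant_ids else 0.0

def mean_ap(runs, qrels):
    """runs: {query: ranked doc ids}; qrels: {query: set of relevant ids}.
    Compare MAP for retrieval over manually vs. automatically indexed data."""
    return sum(average_precision(runs[q], qrels[q]) for q in runs) / len(runs)
```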
  7. ¬The Computer Science Ontology (CSO) (2018) 0.03
    0.027893096 = product of:
      0.111572385 = sum of:
        0.111572385 = weight(_text_:assisted in 4429) [ClassicSimilarity], result of:
          0.111572385 = score(doc=4429,freq=2.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.37317806 = fieldWeight in 4429, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4429)
      0.25 = coord(1/4)
    
    Abstract
    The Computer Science Ontology (CSO) is a large-scale ontology of research areas that was automatically generated by running the Klink-2 algorithm on the Rexplore dataset, which consists of about 16 million publications, mainly in the field of Computer Science. The Klink-2 algorithm combines semantic technologies, machine learning, and knowledge from external sources to automatically generate a fully populated ontology of research areas. Some relationships were also revised manually by experts during the preparation of two ontology-assisted surveys in the fields of Semantic Web and Software Architecture. The main root of CSO is Computer Science; however, the ontology also includes a few secondary roots, such as Linguistics, Geometry, and Semantics. CSO presents two main advantages over manually crafted categorisations used in Computer Science (e.g., the 2012 ACM Classification, the Microsoft Academic Search Classification). First, it can characterise higher-level research areas by means of hundreds of sub-topics and related terms, which makes it possible to map very specific terms to higher-level research areas. Second, it can easily be updated by running Klink-2 on a set of new publications. A more comprehensive discussion of the advantages of adopting an automatically generated ontology in the scholarly domain can be found in.
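
    A toy sketch of the first advantage: rolling a very specific term up through super-topic links to higher-level research areas. The miniature hierarchy here is hypothetical, standing in for CSO's much larger machine-generated structure.

```python
# Hypothetical miniature of a super-topic structure; the real CSO is
# far larger and generated automatically by Klink-2.
SUPER_TOPIC = {
    "ontology matching": "semantic web",
    "semantic web": "world wide web",
    "world wide web": "computer science",
    "linguistics": None,  # a secondary root has no parent
}

def roll_up(topic):
    """Map a specific term to its chain of higher-level research areas."""
    chain = [topic]
    while SUPER_TOPIC.get(topic):
        topic = SUPER_TOPIC[topic]
        chain.append(topic)
    return chain

print(roll_up("ontology matching"))
# ['ontology matching', 'semantic web', 'world wide web', 'computer science']
```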
  8. Yaco, S.; Ramaprasad, A.: Informatics for cultural heritage instruction : an ontological framework (2019) 0.02
    0.024070604 = product of:
      0.096282415 = sum of:
        0.096282415 = product of:
          0.19256483 = sum of:
            0.19256483 = weight(_text_:instruction in 5029) [ClassicSimilarity], result of:
              0.19256483 = score(doc=5029,freq=10.0), product of:
                0.26266864 = queryWeight, product of:
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.04425879 = queryNorm
                0.73310935 = fieldWeight in 5029, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5029)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Purpose: The purpose of this paper is to suggest a framework that creates a common language to enhance the connection between the domains of cultural heritage (CH) artifacts and instruction.
    Design/methodology/approach: The CH and instruction domains are logically deconstructed into dimensions of functions, semiotics, CH, teaching/instructional materials, agents, and outcomes. The elements within those dimensions can be concatenated to create natural-English sentences that describe aspects of the problem domain.
    Findings: The framework is valid according to traditional social-science validity constructs: content, semantic, practical, and systemic.
    Research limitations/implications: The framework can be used to map the current research literature to discover areas of heavy, light, and no research.
    Originality/value: The framework provides a new way for CH and education stakeholders to describe and visualize the problem domain, which could allow for significant enhancements of each. A better understanding of the problem domain would serve to enhance instruction informed by collections, and vice versa. The educational process would gain depth through better access to primary sources, and increased use of collections would reveal more ways they could be used in instruction. The framework can help visualize the past and present of the domain and envisage its future.
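
    The concatenation idea in the design section can be illustrated mechanically: take one element per dimension and join them into a natural-English sentence. The dimension elements below are hypothetical examples, not the framework's actual vocabulary.

```python
from itertools import product

# Hypothetical elements for three of the framework's dimensions;
# the actual framework has more dimensions and richer element sets.
functions = ["discover", "interpret"]
materials = ["primary sources", "teaching materials"]
agents = ["students", "curators"]

# Concatenate one element per dimension into sentences that each
# describe one aspect of the problem domain.
for f, m, a in product(functions, materials, agents):
    print(f"{a.capitalize()} {f} {m} for cultural heritage instruction.")
```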
  9. Ramisch, C.: Multiword expressions acquisition : a generic and open framework (2015) 0.02
    0.022314476 = product of:
      0.0892579 = sum of:
        0.0892579 = weight(_text_:assisted in 1649) [ClassicSimilarity], result of:
          0.0892579 = score(doc=1649,freq=2.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.29854244 = fieldWeight in 1649, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.03125 = fieldNorm(doc=1649)
      0.25 = coord(1/4)
    
    Abstract
    This book is an excellent introduction to multiword expressions, providing a unique, comprehensive and up-to-date overview of this exciting topic in computational linguistics. The first part describes the diversity and richness of multiword expressions, including many examples in several languages. These constructions are not only complex and arbitrary, but also much more frequent than one would guess, making them a real nightmare for natural language processing applications. The second part introduces a new generic framework for the automatic acquisition of multiword expressions from texts. Furthermore, it describes the accompanying free software tool, the mwetoolkit, which comes in handy when looking for expressions in texts (regardless of the language). Evaluation is greatly emphasized, underlining the fact that results depend on parameters such as corpus size, language, MWE type, etc. The last part contains solid experimental results and evaluates the mwetoolkit, demonstrating its usefulness for computer-assisted lexicography and machine translation. This is the first book to cover the whole pipeline of multiword expression acquisition in a single volume. It addresses the needs of students and researchers in computational and theoretical linguistics, cognitive sciences, artificial intelligence and computer science. Its good balance between computational and linguistic views makes it the perfect starting point for anyone interested in multiword expressions, language and text processing in general.
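
    As a generic illustration of MWE candidate acquisition (not the mwetoolkit's own pipeline or API), adjacent word pairs can be ranked by pointwise mutual information, a standard association measure in this area; the frequency cutoff is an illustrative assumption.

```python
import math
from collections import Counter

def mwe_candidates(tokens, min_freq=3):
    """Rank adjacent word pairs by pointwise mutual information (PMI),
    a common association measure for multiword-expression extraction."""
    unigrams = Counter(tokens)
    bigram_counts = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    scored = {}
    for (w1, w2), f in bigram_counts.items():
        if f < min_freq:
            continue  # skip rare pairs, which inflate PMI
        pmi = math.log2((f / n) / ((unigrams[w1] / n) * (unigrams[w2] / n)))
        scored[(w1, w2)] = pmi
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)
```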
  10. Detlor, B.; Julien, H.; Willson, R.; Serenko, A.; Lavallee, M.: Learning outcomes of information literacy instruction at business schools (2011) 0.02
    0.018268304 = product of:
      0.073073216 = sum of:
        0.073073216 = product of:
          0.14614643 = sum of:
            0.14614643 = weight(_text_:instruction in 4356) [ClassicSimilarity], result of:
              0.14614643 = score(doc=4356,freq=4.0), product of:
                0.26266864 = queryWeight, product of:
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.04425879 = queryNorm
                0.5563909 = fieldWeight in 4356, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4356)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This paper reports results from an exploratory study investigating the factors affecting student learning outcomes of information literacy instruction (ILI) given at business schools. Specifically, the potential influence of student demographics, learning environment factors, and information literacy program components on behavioral, psychological, and benefit outcomes were examined. In total, 79 interviews with library administrators, librarians, teaching faculty, and students were conducted at three business schools with varying ILI emphases and characteristics. During these interviews, participants discussed students' ILI experiences and the outcomes arising from those experiences. Data collection also involved application of a standardized information literacy testing instrument that measures student information literacy competency. Analysis yielded the generation of a new holistic theoretical model based on information literacy and educational assessment theories. The model identifies potential salient factors of the learning environment, information literacy program components, and student demographics that may affect ILI student learning outcomes. Recommendations for practice and implications for future research are also made.
  11. Hudon, M.: Teaching classification in the 21st century (2011) 0.02
    0.018268304 = product of:
      0.073073216 = sum of:
        0.073073216 = product of:
          0.14614643 = sum of:
            0.14614643 = weight(_text_:instruction in 4616) [ClassicSimilarity], result of:
              0.14614643 = score(doc=4616,freq=4.0), product of:
                0.26266864 = queryWeight, product of:
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.04425879 = queryNorm
                0.5563909 = fieldWeight in 4616, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4616)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Cataloguing and classification were at the core of the first librarian training programs. In 2011, LIS educators continue to believe in the importance of teaching the basics of the classification process to all future information professionals. Information on classification instruction was collected through a survey of instructors in ALA-accredited LIS master's programs. The survey was structured around issues touching several dimensions of any teaching endeavour, with an emphasis on the tools used to help students develop the several types of skills involved in the classification process. This article presents quantitative data provided by respondents representing 31 distinct LIS master's programs. We hope it can be used as a foundation for pursuing the examination of classification instruction in an ever-changing information world.
  12. Serenko, A.; Detlor, B.; Julien, H.; Booker, L.D.: ¬A model of student learning outcomes of information literacy instruction in a business school (2012) 0.02
    0.018268304 = product of:
      0.073073216 = sum of:
        0.073073216 = product of:
          0.14614643 = sum of:
            0.14614643 = weight(_text_:instruction in 62) [ClassicSimilarity], result of:
              0.14614643 = score(doc=62,freq=4.0), product of:
                0.26266864 = queryWeight, product of:
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.04425879 = queryNorm
                0.5563909 = fieldWeight in 62, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.046875 = fieldNorm(doc=62)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This study presents and tests a research model of the outcomes of information literacy instruction (ILI) given to undergraduate business students. This model is based on expectation disconfirmation theory and insights garnered from a recent qualitative investigation of student learning outcomes from ILI given at three business schools. The model was tested through a web survey administered to 372 students. The model represents psychological, behavioral, and benefit outcomes as second-order molecular constructs. Results from a partial least squares (PLS) analysis reveal that expectation disconfirmation influences perceived quality and student satisfaction. These in turn affect student psychological outcomes. Further, psychological outcomes influence student behaviors, which in turn affect benefit outcomes. Based on the study's findings, several recommendations are made.
  13. Booker, L.D.; Detlor, B.; Serenko, A.: Factors affecting the adoption of online library resources by business students (2012) 0.02
    0.018268304 = product of:
      0.073073216 = sum of:
        0.073073216 = product of:
          0.14614643 = sum of:
            0.14614643 = weight(_text_:instruction in 519) [ClassicSimilarity], result of:
              0.14614643 = score(doc=519,freq=4.0), product of:
                0.26266864 = queryWeight, product of:
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.04425879 = queryNorm
                0.5563909 = fieldWeight in 519, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.046875 = fieldNorm(doc=519)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The overall goal of this study is to explain how information literacy instruction (ILI) influences the adoption of online library resources (OLR) by business students. A theoretical model was developed that integrates research on ILI outcomes and technology adoption. To test this model, a web-based survey, which included both closed and open-ended questions, was administered to 337 business students. Findings indicate that the ILI received by students is beneficial in the initial or early stages of OLR use; however, students quickly reach a saturation point where more instruction contributes little, if anything, to the final outcome, such as reduced OLR anxiety and increased OLR self-efficacy. Rather, it is the independent, continuous use of OLR after receiving initial, formal ILI that creates continued positive effects. Importantly, OLR self-efficacy and anxiety were found to be important antecedents to OLR adoption. OLR anxiety also partially mediates the relationship between self-efficacy and perceived ease of use. Implications for theory and practice are discussed.
  14. Smith, C.L.: Domain-independent search expertise : a description of procedural knowledge gained during guided instruction (2015) 0.02
    0.018268304 = product of:
      0.073073216 = sum of:
        0.073073216 = product of:
          0.14614643 = sum of:
            0.14614643 = weight(_text_:instruction in 2034) [ClassicSimilarity], result of:
              0.14614643 = score(doc=2034,freq=4.0), product of:
                0.26266864 = queryWeight, product of:
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.04425879 = queryNorm
                0.5563909 = fieldWeight in 2034, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2034)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This longitudinal study examined the search behavior of 10 students as they completed assigned exercises for an online professional course in expert searching. The research objective was to identify, describe, and hypothesize about features of the behavior that are indicative of procedural knowledge gained during guided instruction. Log-data of search interaction were coded using a conceptual framework focused on components of search practice hypothesized to organize an expert searcher's attention during search. The coded data were analyzed using a measure of pointwise mutual information and state-transition analysis. Results of the study provide important insight for future investigation of domain-independent search expertise and for the design of systems that assist searchers in gaining expertise.
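
    A minimal sketch of the state-transition side of the analysis: estimating first-order transition probabilities from coded interaction logs. The state codes in the usage example are hypothetical, not the study's actual coding scheme.

```python
from collections import Counter, defaultdict

def transition_probabilities(coded_sessions):
    """Estimate a first-order state-transition matrix from coded
    search-interaction logs; each session is a list of state codes."""
    counts = defaultdict(Counter)
    for session in coded_sessions:
        for a, b in zip(session, session[1:]):
            counts[a][b] += 1                      # count transition a -> b
    return {a: {b: f / sum(row.values()) for b, f in row.items()}
            for a, row in counts.items()}          # normalize each row

# Usage with hypothetical state codes:
# transition_probabilities([["query", "scan", "query", "open"]])
# -> {'query': {'scan': 0.5, 'open': 0.5}, 'scan': {'query': 1.0}}
```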
  15. Sormunen, E.; Tanni, M.; Alamettälä, T.; Heinström, J.: Students' group work strategies in source-based writing assignments (2014) 0.02
    0.015223586 = product of:
      0.060894344 = sum of:
        0.060894344 = product of:
          0.12178869 = sum of:
            0.12178869 = weight(_text_:instruction in 1289) [ClassicSimilarity], result of:
              0.12178869 = score(doc=1289,freq=4.0), product of:
                0.26266864 = queryWeight, product of:
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.04425879 = queryNorm
                0.46365905 = fieldWeight in 1289, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1289)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Source-based writing assignments conducted by groups of students are a common learning task used in information literacy instruction. The fundamental assumption in group assignments is that students' collaboration substantially enhances their learning. The present study focused on the group work strategies adopted by upper secondary school students in source-based writing assignments. Seventeen groups authored Wikipedia or Wikipedia-style articles and were interviewed during and after the assignment. Group work strategies were analyzed in 6 activities: planning, searching, assessing sources, reading, writing, and editing. The students used 2 cooperative strategies: delegation and division of work, and 2 collaborative strategies: pair and group collaboration. Division of work into independently conducted parts was the most popular group work strategy. Also group collaboration, where students worked together to complete an activity, was commonly applied. Division of work was justified by efficiency in completing the project and by ease of control in the fair division of contributions. The motivation behind collaboration was related to quality issues and shared responsibility. We suggest that the present designs of learning tasks lead students to avoid collaboration, increasing the risk of low learning outcomes in information literacy instruction.
  16. Costello, K.L.; Martin III, J.D.; Brinegar, A.E.: Online disclosure of illicit information : information behaviors in two drug forums (2017) 0.02
    0.015223586 = product of:
      0.060894344 = sum of:
        0.060894344 = product of:
          0.12178869 = sum of:
            0.12178869 = weight(_text_:instruction in 3832) [ClassicSimilarity], result of:
              0.12178869 = score(doc=3832,freq=4.0), product of:
                0.26266864 = queryWeight, product of:
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.04425879 = queryNorm
                0.46365905 = fieldWeight in 3832, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3832)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Although people disclose illicit activities such as drug use online, we currently know little about what information people choose to disclose and share, or whether there are differences in behavior depending on the illicit activity being disclosed. This exploratory mixed-methods study examines how people discuss and disclose the use of two different drugs (marijuana and opioids) on Reddit. In this study, hermeneutic content analysis is employed to describe the type of comments people make in forums dedicated to discussions about illicit drugs. With inductive analysis, seven categories of comments were identified: disclosure, instruction and advice, culture, community norms, moralizing, legality, and banter. Our subsequent quantitative analysis indicates that although the amounts of disclosure are similar in each subreddit, there are more instances of instruction and advice in discussions about opiates, and more examples of banter in comments about marijuana use. In fact, both subreddits have high rates of banter. We argue that banter fosters disclosure in both subreddits, and that banter and disclosure are linked with information-seeking behaviors in online forums. This work has implications for future explorations of disclosure online and for public health interventions aimed at disseminating credible information about drug use to at-risk individuals.
  17. Schultz Jr., W.N.; Braddy, L.: ¬A librarian-centered study of perceptions of subject terms and controlled vocabulary (2017) 0.02
    0.015070582 = product of:
      0.060282327 = sum of:
        0.060282327 = product of:
          0.120564654 = sum of:
            0.120564654 = weight(_text_:instruction in 5156) [ClassicSimilarity], result of:
              0.120564654 = score(doc=5156,freq=2.0), product of:
                0.26266864 = queryWeight, product of:
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.04425879 = queryNorm
                0.45899904 = fieldWeight in 5156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5156)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Controlled vocabulary and subject headings in OPAC records have proven to be useful in improving search results. The authors used a survey to gather information about librarian opinions and professional use of controlled vocabulary. Data from a range of backgrounds and expertise were examined, including academic and public libraries, and technical services as well as public services professionals. Responses overall demonstrated positive opinions of the value of controlled vocabulary, including in reference interactions as well as during bibliographic instruction sessions. Results are also examined based upon factors such as age and type of librarian.
  18. Du, H.; Hao, J.-X.; Kwok, R.; Wagner, C.: Can a lean medium enhance large-group communication? : Examining the impact of interactive mobile learning (2010) 0.01
    0.012917642 = product of:
      0.051670566 = sum of:
        0.051670566 = product of:
          0.10334113 = sum of:
            0.10334113 = weight(_text_:instruction in 4003) [ClassicSimilarity], result of:
              0.10334113 = score(doc=4003,freq=2.0), product of:
                0.26266864 = queryWeight, product of:
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.04425879 = queryNorm
                0.39342776 = fieldWeight in 4003, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4003)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This research empirically evaluated the use of mobile information and communication technology in a large-sized undergraduate class, where the effectiveness of multilearner participation and prompt learner-instructor interaction is often challenged. The authors analyzed the effectiveness of a so-called "lean" communication medium using hand-held mobile devices, whose brief text-based messages considerably limit the speed of information exchange. Adopting a social construction perspective of media richness theory and a reinforced approach to learning and practice, the authors conjectured that an interactive learning system built with wireless PDA devices can enhance individual practices and reinforce peer influences. Consequently, they expected better understanding and higher satisfaction among learners. A field experiment with 118 participants in the treatment and 114 participants in the control group supported their hypotheses. Their results suggested that richness of a "lean" medium could be increased in certain socially constructed conditions, thus extending existing notions of computer-aided instruction towards a techno-social learning model.
  19. Gemberling, T.: Thema and FRBR's third group (2010) 0.01
    0.012917642 = product of:
      0.051670566 = sum of:
        0.051670566 = product of:
          0.10334113 = sum of:
            0.10334113 = weight(_text_:instruction in 4158) [ClassicSimilarity], result of:
              0.10334113 = score(doc=4158,freq=2.0), product of:
                0.26266864 = queryWeight, product of:
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.04425879 = queryNorm
                0.39342776 = fieldWeight in 4158, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4158)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The treatment of subjects by Functional Requirements for Bibliographic Records (FRBR) has attracted less attention than some of its other aspects, but there seems to be a general consensus that it needs work. While some have proposed elaborating its subject categories (concepts, objects, events, and places) to increase their semantic complexity, a working group of the International Federation of Library Associations and Institutions (IFLA) has recently made a promising proposal that essentially bypasses those categories in favor of one entity, thema. This article gives an overview of the proposal and discusses its relevance to another difficult problem, ambiguities in the establishment of headings for buildings.

    Use of dynamic links from subject-based finding aids to records for electronic resources in the OPAC is suggested as one method for bypassing the OPAC search interface, thus making the library's electronic resources more accessible. This method simplifies the maintenance of links to electronic resources and aids instruction by providing a single, consistent access point to them. Results of a usage study from before and after this project was completed show a consistent, often dramatic increase in use of the library's electronic resources.
  20. Danskin, A.: Linked and open data : RDA and bibliographic control (2012) 0.01
    0.012917642 = product of:
      0.051670566 = sum of:
        0.051670566 = product of:
          0.10334113 = sum of:
            0.10334113 = weight(_text_:instruction in 304) [ClassicSimilarity], result of:
              0.10334113 = score(doc=304,freq=2.0), product of:
                0.26266864 = queryWeight, product of:
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.04425879 = queryNorm
                0.39342776 = fieldWeight in 304, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.046875 = fieldNorm(doc=304)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    RDA: Resource Description and Access is a new cataloguing standard which will replace the Anglo-American Cataloguing Rules, 2nd edition, which has been widely used in libraries since 1981. RDA, like AACR2, is a content standard providing guidance and instruction on how to identify and record attributes or properties of resources which are significant for discovery. However, RDA is also an implementation of the FRBR and FRAD models. The RDA element set and vocabularies are being published on the Open Metadata Registry as linked open data. RDA provides a rich vocabulary for the description of resources and for expressing relationships between them. This paper describes what RDA offers and considers the challenges and potential of linked open data in the broader framework of bibliographic control.

Languages

  • e 500
  • d 179
  • a 1
  • hu 1

Types

  • a 597
  • el 63
  • m 45
  • s 15
  • x 12
  • r 7
  • b 5
  • i 1
  • z 1
