Search (927 results, page 1 of 47)

  • year_i:[2010 TO 2020}
  1. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.09
    0.09152177 = product of:
      0.18304354 = sum of:
        0.18304354 = sum of:
          0.113330126 = weight(_text_:learning in 2748) [ClassicSimilarity], result of:
            0.113330126 = score(doc=2748,freq=2.0), product of:
              0.22973695 = queryWeight, product of:
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.05145426 = queryNorm
              0.49330387 = fieldWeight in 2748, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.078125 = fieldNorm(doc=2748)
          0.06971342 = weight(_text_:22 in 2748) [ClassicSimilarity], result of:
            0.06971342 = score(doc=2748,freq=2.0), product of:
              0.18018405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05145426 = queryNorm
              0.38690117 = fieldWeight in 2748, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=2748)
      0.5 = coord(1/2)
    
    Date
    1. 2.2016 18:25:22
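    The nested score breakdowns in this listing are Lucene ClassicSimilarity (TF-IDF) explain trees. As a minimal worked sketch, assuming standard ClassicSimilarity semantics (which the field names suggest), the top score of 0.09152177 for this entry can be reconstructed from the reported statistics; the code below is illustrative, not Lucene's actual implementation:

      # Reconstruction of the explain tree for doc 2748 (entry 1), assuming
      # ClassicSimilarity: score(t) = queryWeight(t) * fieldWeight(t), where
      #   queryWeight(t) = idf(t) * queryNorm
      #   fieldWeight(t) = sqrt(tf) * idf(t) * fieldNorm
      import math

      query_norm = 0.05145426
      field_norm = 0.078125  # fieldNorm(doc=2748)

      def term_score(tf, idf):
          query_weight = idf * query_norm                 # 0.22973695 for "learning"
          field_weight = math.sqrt(tf) * idf * field_norm
          return query_weight * field_weight

      s_learning = term_score(tf=2.0, idf=4.464877)       # ~0.11333013
      s_22 = term_score(tf=2.0, idf=3.5018296)            # ~0.06971342
      print((s_learning + s_22) * 0.5)                    # coord(1/2) -> ~0.09152177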
  2. Soergel, D.: Knowledge organization for learning (2014) 0.09
    0.090601936 = product of:
      0.18120387 = sum of:
        0.18120387 = sum of:
          0.1121911 = weight(_text_:learning in 1400) [ClassicSimilarity], result of:
            0.1121911 = score(doc=1400,freq=4.0), product of:
              0.22973695 = queryWeight, product of:
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.05145426 = queryNorm
              0.48834592 = fieldWeight in 1400, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1400)
          0.06901277 = weight(_text_:22 in 1400) [ClassicSimilarity], result of:
            0.06901277 = score(doc=1400,freq=4.0), product of:
              0.18018405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05145426 = queryNorm
              0.38301262 = fieldWeight in 1400, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1400)
      0.5 = coord(1/2)
    
    Abstract
    This paper discusses and illustrates through examples how meaningful or deep learning can be supported through well-structured presentation of material, through giving learners schemas they can use to organize knowledge in their minds, and through helping learners to understand knowledge organization principles they can use to construct their own schemas. It is a call to all authors, educators and information designers to pay attention to meaningful presentation that expresses the internal structure of the domain and facilitates the learner's assimilation of concepts and their relationships.
    Pages
    S.22-32
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  3. Kaminska, A.; Pulak, I.: Knowledge organization in a digital learning environment in the experiences of pedagogy students (2014) 0.09
    0.08891211 = product of:
      0.17782421 = sum of:
        0.17782421 = sum of:
          0.13599616 = weight(_text_:learning in 1469) [ClassicSimilarity], result of:
            0.13599616 = score(doc=1469,freq=8.0), product of:
              0.22973695 = queryWeight, product of:
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.05145426 = queryNorm
              0.59196466 = fieldWeight in 1469, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.046875 = fieldNorm(doc=1469)
          0.04182805 = weight(_text_:22 in 1469) [ClassicSimilarity], result of:
            0.04182805 = score(doc=1469,freq=2.0), product of:
              0.18018405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05145426 = queryNorm
              0.23214069 = fieldWeight in 1469, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1469)
      0.5 = coord(1/2)
    
    Abstract
    This paper presents the results of a diagnostic survey showing how students of pedagogy create and organize the digital personal environments they use in their individual learning processes. The survey covered 272 students at Cracow schools. It analyzed the information sources they used most, their ways of storing, organizing, and aggregating information, and the tools used for this purpose. The ability to design and build a digital personal learning environment (PLE) is a very important element of lifelong learning in today's world and enables efficient functioning in the information society.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  4. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.09
    0.08653661 = sum of:
      0.054482006 = product of:
        0.16344601 = sum of:
          0.16344601 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
            0.16344601 = score(doc=5820,freq=2.0), product of:
              0.4362298 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.05145426 = queryNorm
              0.3746787 = fieldWeight in 5820, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.03125 = fieldNorm(doc=5820)
        0.33333334 = coord(1/3)
      0.032054603 = product of:
        0.064109206 = sum of:
          0.064109206 = weight(_text_:learning in 5820) [ClassicSimilarity], result of:
            0.064109206 = score(doc=5820,freq=4.0), product of:
              0.22973695 = queryWeight, product of:
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.05145426 = queryNorm
              0.27905482 = fieldWeight in 5820, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.03125 = fieldNorm(doc=5820)
        0.5 = coord(1/2)
    
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations while taking their uncertainties into account. Finally, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts and to rank documents using their structure representations. This dissertation overcomes the limitation of word-based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards the new generation of intelligent, semantic, and structured information retrieval.
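    As a rough sketch of the bag-of-entities idea described above (hypothetical and heavily simplified, not the dissertation's implementation), ranking in the entity space can be as simple as matching a query's entity annotations against a document's entity frequencies:

      # Hypothetical bag-of-entities sketch: queries and documents are
      # represented by entity-annotation counts instead of word counts.
      from collections import Counter

      def bag_of_entities(annotations):
          # annotations: entity IDs produced by an entity linker over the text
          return Counter(annotations)

      def score(query_entities, doc_entities):
          # simplest possible entity-space match: sum the document's
          # frequencies of each distinct query entity
          return sum(doc_entities[e] for e in query_entities)

      q = bag_of_entities(["Information_retrieval", "Knowledge_base"])
      d = bag_of_entities(["Information_retrieval", "Information_retrieval", "Ranking"])
      print(score(q, d))  # 2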
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  5. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.08
    0.081723005 = product of:
      0.16344601 = sum of:
        0.16344601 = product of:
          0.49033803 = sum of:
            0.49033803 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
              0.49033803 = score(doc=973,freq=2.0), product of:
                0.4362298 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05145426 = queryNorm
                1.1240361 = fieldWeight in 973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.09375 = fieldNorm(doc=973)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Cf.: http://creativechoice.org/doc/HansJonas.pdf.
  6. Chianese, A.; Cantone, F.; Caropreso, M.; Moscato, V.: ARCHAEOLOGY 2.0 : Cultural E-Learning tools and distributed repositories supported by SEMANTICA, a System for Learning Object Retrieval and Adaptive Courseware Generation for e-learning environments (2010) 0.08
    0.080781825 = product of:
      0.16156365 = sum of:
        0.16156365 = sum of:
          0.12670694 = weight(_text_:learning in 3733) [ClassicSimilarity], result of:
            0.12670694 = score(doc=3733,freq=10.0), product of:
              0.22973695 = queryWeight, product of:
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.05145426 = queryNorm
              0.55153054 = fieldWeight in 3733, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3733)
          0.03485671 = weight(_text_:22 in 3733) [ClassicSimilarity], result of:
            0.03485671 = score(doc=3733,freq=2.0), product of:
              0.18018405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05145426 = queryNorm
              0.19345059 = fieldWeight in 3733, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3733)
      0.5 = coord(1/2)
    
    Abstract
    The focus of the present research has been the development of a web-based framework for learning object indexing and retrieval and its application to Virtual Archaeology. The paper presents the main outcomes of an experiment carried out by an interdisciplinary group at the Federico II University of Naples. Our team is composed of researchers in both ICT and the humanities, in particular in the domains of Virtual Archaeology and Cultural Heritage Informatics, with the aim of developing specific ICT methodological approaches to Virtual Archaeology. The methodological background is the progressive diffusion of Web 2.0 technologies and the attempt to analyze their impact and perspectives in the Cultural Heritage field. In particular, we approached the specific requirements of so-called Learning 2.0 and the possibility of improving the automation of modular courseware generation in Virtual Archaeology didactics. The developed framework, called SEMANTICA, was applied to Virtual Archaeology domain ontologies in order to generate a didactic course in a semi-automated way. The main results of this test and the first student feedback on the use of the course are presented and discussed.
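    As a toy illustration of semi-automated courseware generation from a domain ontology (a hypothetical sketch; the SEMANTICA pipeline described above is far richer), a linear course outline can be derived by walking a concept hierarchy:

      # Hypothetical sketch: each ontology concept becomes a course module,
      # with subconcepts following their parent in depth-first order.
      ontology = {
          "Virtual Archaeology": ["Excavation 3D Models", "Stratigraphy"],
          "Excavation 3D Models": ["Photogrammetry"],
          "Stratigraphy": [],
          "Photogrammetry": [],
      }

      def course_outline(concept):
          yield concept
          for sub in ontology[concept]:
              yield from course_outline(sub)

      for i, module in enumerate(course_outline("Virtual Archaeology"), 1):
          print(f"Module {i}: {module}")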
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P.Ohly
  7. Snow, K.; Hoffman, G.L.: What makes an effective cataloging course? : a study of the factors that promote learning (2015) 0.08
    0.080495246 = product of:
      0.16099049 = sum of:
        0.16099049 = sum of:
          0.1121911 = weight(_text_:learning in 2609) [ClassicSimilarity], result of:
            0.1121911 = score(doc=2609,freq=4.0), product of:
              0.22973695 = queryWeight, product of:
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.05145426 = queryNorm
              0.48834592 = fieldWeight in 2609, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2609)
          0.04879939 = weight(_text_:22 in 2609) [ClassicSimilarity], result of:
            0.04879939 = score(doc=2609,freq=2.0), product of:
              0.18018405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05145426 = queryNorm
              0.2708308 = fieldWeight in 2609, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2609)
      0.5 = coord(1/2)
    
    Abstract
    This paper presents the results of a research study, a survey of library and information science master's degree holders who have taken a beginning cataloging course, to identify the elements of a beginning cataloging course that help students to learn cataloging concepts and skills. The results suggest that cataloging practice (the hands-on creation of bibliographic records or catalog cards), the effectiveness of the instructor, a balance of theory and practice, and placing cataloging in a real-world context all contribute to effective learning. However, more research is needed to determine how, and to what extent, each element should be incorporated into beginning cataloging courses.
    Date
    10. 9.2000 17:38:22
  8. Tang, X.; Chen, L.; Cui, J.; Wei, B.: Knowledge representation learning with entity descriptions, hierarchical types, and textual relations (2019) 0.08
    0.079802096 = product of:
      0.15960419 = sum of:
        0.15960419 = sum of:
          0.11777613 = weight(_text_:learning in 5101) [ClassicSimilarity], result of:
            0.11777613 = score(doc=5101,freq=6.0), product of:
              0.22973695 = queryWeight, product of:
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.05145426 = queryNorm
              0.51265645 = fieldWeight in 5101, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.046875 = fieldNorm(doc=5101)
          0.04182805 = weight(_text_:22 in 5101) [ClassicSimilarity], result of:
            0.04182805 = score(doc=5101,freq=2.0), product of:
              0.18018405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05145426 = queryNorm
              0.23214069 = fieldWeight in 5101, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5101)
      0.5 = coord(1/2)
    
    Abstract
    Knowledge representation learning methods usually utilize only triple facts, or consider just one kind of extra information. In this paper, we propose a multi-source knowledge representation learning (MKRL) model, which can combine entity descriptions, hierarchical types, and textual relations with triple facts. Specifically, for entity descriptions, a convolutional neural network is used to get representations. For hierarchical types, weighted hierarchy encoders are used to construct the projection matrices of hierarchical types, and the projection matrix of an entity combines all of the entity's hierarchical type projection matrices under relation-specific type constraints. For textual relations, a sentence-level attention mechanism is employed to get representations. We evaluate the MKRL model on the knowledge graph completion task with the FB15k-237 dataset, and experimental results demonstrate that our model outperforms state-of-the-art methods, which indicates the effectiveness of multi-source information for knowledge representation.
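    For illustration only (a hedged sketch of the hierarchical-type component as summarized above, with invented shapes and weights, not the authors' code), an entity's projection matrix can be formed as a weighted combination of its types' projection matrices:

      # Hypothetical sketch: combine per-type projection matrices into one
      # entity projection matrix, weighting each hierarchical type.
      import numpy as np

      dim = 4
      type_projections = {                  # invented per-type matrices
          "/people/person": np.eye(dim),
          "/people/person/writer": 0.5 * np.eye(dim),
      }
      type_weights = {"/people/person": 0.3, "/people/person/writer": 0.7}

      def entity_projection(entity_types):
          total = sum(type_weights[t] for t in entity_types)
          return sum(type_weights[t] * type_projections[t]
                     for t in entity_types) / total

      M = entity_projection(["/people/person", "/people/person/writer"])
      print(M @ np.ones(dim))  # project a toy entity embedding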
    Date
    17. 3.2019 13:22:53
  9. Isah, E.E.; Byström, K.: Physicians' learning at work through everyday access to information (2016) 0.07
    0.07409342 = product of:
      0.14818683 = sum of:
        0.14818683 = sum of:
          0.113330126 = weight(_text_:learning in 2641) [ClassicSimilarity], result of:
            0.113330126 = score(doc=2641,freq=8.0), product of:
              0.22973695 = queryWeight, product of:
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.05145426 = queryNorm
              0.49330387 = fieldWeight in 2641, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2641)
          0.03485671 = weight(_text_:22 in 2641) [ClassicSimilarity], result of:
            0.03485671 = score(doc=2641,freq=2.0), product of:
              0.18018405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05145426 = queryNorm
              0.19345059 = fieldWeight in 2641, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2641)
      0.5 = coord(1/2)
    
    Abstract
    This article explores access to information through an analysis of sources and strategies as part of workplace learning in a medical context in an African developing country. It focuses on information practices in everyday patient care by a team of senior and junior physicians in a university teaching hospital. A practice-oriented, interpretative case-study approach, combining elements from activity theory, situated learning theory, and the communities of practice framework, was developed to form the theoretical basis for the study. The qualitative data from observations and interviews were analyzed with iterative coding techniques. The findings reveal that physicians' learning through everyday access to medical information is enacted by, embedded in, and sustained as a part of the work activity itself. The findings indicate a stable community of practice with traits of both local and general medical conventions, in which the value of the sources and strategies used remains relatively uncontested, strongly based on formally and informally sanctioned and legitimized practices. Although the present study is particular and context specific, the results indicate a more generally plausible conclusion: the complementary nature of different information sources and strategies underscores that access to information happens in a context in which solitary sources alone make little difference.
    Date
    22. 1.2016 12:31:37
  10. Devaul, H.; Diekema, A.R.; Ostwald, J.: Computer-assisted assignment of educational standards using natural language processing (2011) 0.07
    0.06899593 = product of:
      0.13799186 = sum of:
        0.13799186 = sum of:
          0.09616381 = weight(_text_:learning in 4199) [ClassicSimilarity], result of:
            0.09616381 = score(doc=4199,freq=4.0), product of:
              0.22973695 = queryWeight, product of:
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.05145426 = queryNorm
              0.41858223 = fieldWeight in 4199, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.046875 = fieldNorm(doc=4199)
          0.04182805 = weight(_text_:22 in 4199) [ClassicSimilarity], result of:
            0.04182805 = score(doc=4199,freq=2.0), product of:
              0.18018405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05145426 = queryNorm
              0.23214069 = fieldWeight in 4199, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4199)
      0.5 = coord(1/2)
    
    Abstract
    Educational standards are a central focus of the current educational system in the United States, underpinning educational practice, curriculum design, teacher professional development, and high-stakes testing and assessment. Digital library users have requested that this information be accessible in association with digital learning resources to support teaching and learning as well as accountability requirements. Providing this information is complex because of the variability and number of standards documents in use at the national, state, and local level. This article describes a cataloging tool that aids catalogers in the assignment of standards metadata to digital library resources, using natural language processing techniques. The research explores whether the standards suggestor service would suggest the same standards as a human, whether relevant standards are ranked appropriately in the result set, and whether the relevance of the suggested assignments improves when, in addition to resource content, metadata is included in the query to the cataloging tool. The article also discusses how this service might streamline the cataloging workflow.
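    One plausible shape for such a suggestion service (a hypothetical sketch assuming a plain TF-IDF/cosine approach; the article does not spell out its exact technique here) is to rank standards documents by their similarity to a resource's text:

      # Hypothetical sketch: rank educational standards against a learning
      # resource by TF-IDF cosine similarity.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      standards = [
          "Students understand the water cycle, including evaporation and condensation.",
          "Students can interpret data presented in graphs and tables.",
      ]
      resource = "Lesson plan: evaporation, condensation, and precipitation experiments."

      vec = TfidfVectorizer(stop_words="english")
      matrix = vec.fit_transform(standards + [resource])
      sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
      ranked = sorted(zip(sims, standards), reverse=True)
      print(ranked[0])  # best-matching standard first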
    Date
    22. 1.2011 14:25:32
  11. Hudon, M.: KO and classification education in the light of Benjamin Bloom's Taxonomy of learning objectives (2014) 0.07
    0.06899593 = product of:
      0.13799186 = sum of:
        0.13799186 = sum of:
          0.09616381 = weight(_text_:learning in 1468) [ClassicSimilarity], result of:
            0.09616381 = score(doc=1468,freq=4.0), product of:
              0.22973695 = queryWeight, product of:
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.05145426 = queryNorm
              0.41858223 = fieldWeight in 1468, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.046875 = fieldNorm(doc=1468)
          0.04182805 = weight(_text_:22 in 1468) [ClassicSimilarity], result of:
            0.04182805 = score(doc=1468,freq=2.0), product of:
              0.18018405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05145426 = queryNorm
              0.23214069 = fieldWeight in 1468, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1468)
      0.5 = coord(1/2)
    
    Abstract
    In a research project focusing on knowledge organization and classification education, 407 learning objectives proposed in courses entirely or partially dedicated to these subjects in North American Library and Information Science programs were categorized with the help of Benjamin Bloom's Taxonomy of cognitive objectives. The analysis reveals that the vast majority of course objectives remain at the lower levels of the Taxonomy. These results tend to reinforce observations made over the past 30 years in relation to KO and classification education. While KO and classification educators recognize the necessity for students to develop high-level analytic and evaluative skills, there are few references to those skills in current course objectives.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  12. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.07
    0.06810251 = product of:
      0.13620502 = sum of:
        0.13620502 = product of:
          0.40861505 = sum of:
            0.40861505 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.40861505 = score(doc=1826,freq=2.0), product of:
                0.4362298 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05145426 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  13. Brandão, W.C.; Santos, R.L.T.; Ziviani, N.; Moura, E.S. de; Silva, A.S. da: Learning to expand queries using entities (2014) 0.07
    0.066501744 = product of:
      0.13300349 = sum of:
        0.13300349 = sum of:
          0.09814678 = weight(_text_:learning in 1343) [ClassicSimilarity], result of:
            0.09814678 = score(doc=1343,freq=6.0), product of:
              0.22973695 = queryWeight, product of:
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.05145426 = queryNorm
              0.42721373 = fieldWeight in 1343, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1343)
          0.03485671 = weight(_text_:22 in 1343) [ClassicSimilarity], result of:
            0.03485671 = score(doc=1343,freq=2.0), product of:
              0.18018405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05145426 = queryNorm
              0.19345059 = fieldWeight in 1343, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1343)
      0.5 = coord(1/2)
    
    Abstract
    A substantial fraction of web search queries contain references to entities, such as persons, organizations, and locations. Recently, methods that exploit named entities have been shown to be more effective for query expansion than traditional pseudorelevance feedback methods. In this article, we introduce a supervised learning approach that exploits named entities for query expansion using Wikipedia as a repository of high-quality feedback documents. In contrast with existing entity-oriented pseudorelevance feedback approaches, we tackle query expansion as a learning-to-rank problem. As a result, not only do we select effective expansion terms but we also weigh these terms according to their predicted effectiveness. To this end, we exploit the rich structure of Wikipedia articles to devise discriminative term features, including each candidate term's proximity to the original query terms, as well as its frequency across multiple article fields and in category and infobox descriptors. Experiments on three Text REtrieval Conference web test collections attest to the effectiveness of our approach, with gains of up to 23.32% in terms of mean average precision, 19.49% in terms of precision at 10, and 7.86% in terms of normalized discounted cumulative gain compared with a state-of-the-art approach for entity-oriented query expansion.
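    As a schematic sketch of the learning-to-rank view of expansion described above (the feature names and weights are invented; the paper derives its features from Wikipedia article structure), candidate terms can be weighted by a model learned offline from relevance judgments:

      # Hypothetical sketch: score candidate expansion terms with a learned
      # linear model over term features (proximity, field frequency, infobox).
      features = {                 # term -> (proximity, field_freq, in_infobox)
          "ranking": (0.8, 5.0, 1.0),
          "wikipedia": (0.3, 2.0, 0.0),
      }
      weights = (1.2, 0.4, 0.9)    # stand-in for learned feature weights

      def term_weight(f):
          return sum(w * x for w, x in zip(weights, f))

      expansion = sorted(features, key=lambda t: term_weight(features[t]),
                         reverse=True)
      print([(t, round(term_weight(features[t]), 2)) for t in expansion])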
    Date
    22. 8.2014 17:07:50
  14. Soergel, D.: Unleashing the power of data through organization : structure and connections for meaning, learning and discovery (2015) 0.07
    0.066501744 = product of:
      0.13300349 = sum of:
        0.13300349 = sum of:
          0.09814678 = weight(_text_:learning in 2376) [ClassicSimilarity], result of:
            0.09814678 = score(doc=2376,freq=6.0), product of:
              0.22973695 = queryWeight, product of:
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.05145426 = queryNorm
              0.42721373 = fieldWeight in 2376, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2376)
          0.03485671 = weight(_text_:22 in 2376) [ClassicSimilarity], result of:
            0.03485671 = score(doc=2376,freq=2.0), product of:
              0.18018405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05145426 = queryNorm
              0.19345059 = fieldWeight in 2376, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2376)
      0.5 = coord(1/2)
    
    Abstract
    Knowledge organization is needed everywhere. Its importance is marked by its pervasiveness. This paper will show many areas, tasks, and functions where proper use of knowledge organization, construed as broadly as the term implies, provides support for learning and understanding, for sense making and meaning making, for inference, and for discovery by people and computer programs and thereby will make the world a better place. The paper focuses not on metadata but rather on structuring and representing the actual data or knowledge itself and argues for more communication between the largely separated KO, ontology, data modeling, and semantic web communities to address the many problems that need better solutions. In particular, the paper discusses the application of knowledge organization in knowledge bases for question answering and cognitive systems, knowledge bases for information extraction from text or multimedia, linked data, big data and data analytics, electronic health records as one example, influence diagrams (causal maps), dynamic system models, process diagrams, concept maps, and other node-link diagrams, information systems in organizations, knowledge organization for understanding and learning, and knowledge transfer between domains. The paper argues for moving beyond triples to a more powerful representation using entities and multi-way relationships but not attributes.
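    A small sketch of the closing point about moving beyond triples (hypothetical names; one common way to contrast the two modelings): a multi-way relationship can be represented directly, whereas triples need an artificial reification node:

      # Hypothetical sketch: a multi-way relationship ("patient treated with
      # drug for disease on date") modeled directly vs. reified into triples.
      from dataclasses import dataclass

      @dataclass
      class Treatment:             # one multi-way relationship, attributes and all
          patient: str
          drug: str
          disease: str
          date: str

      t = Treatment("patient42", "metformin", "type 2 diabetes", "2015-03-01")

      # the same fact as triples requires an artificial event node:
      triples = [
          ("event1", "hasPatient", t.patient),
          ("event1", "usesDrug", t.drug),
          ("event1", "treatsDisease", t.disease),
          ("event1", "onDate", t.date),
      ]
      print(t, triples, sep="\n")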
    Date
    27.11.2015 20:52:22
  15. Ilik, V.; Storlien, J.; Olivarez, J.: Metadata makeover (2014) 0.06
    0.06406524 = product of:
      0.12813048 = sum of:
        0.12813048 = sum of:
          0.07933109 = weight(_text_:learning in 2606) [ClassicSimilarity], result of:
            0.07933109 = score(doc=2606,freq=2.0), product of:
              0.22973695 = queryWeight, product of:
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.05145426 = queryNorm
              0.3453127 = fieldWeight in 2606, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2606)
          0.04879939 = weight(_text_:22 in 2606) [ClassicSimilarity], result of:
            0.04879939 = score(doc=2606,freq=2.0), product of:
              0.18018405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05145426 = queryNorm
              0.2708308 = fieldWeight in 2606, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2606)
      0.5 = coord(1/2)
    
    Abstract
    Catalogers have become fluent in information technologies such as web design, HyperText Markup Language (HTML), Cascading Style Sheets (CSS), eXtensible Markup Language (XML), and programming languages. The knowledge gained from learning information technology can be used to experiment with methods of transforming one metadata schema into another using various software solutions. This paper will discuss the use of eXtensible Stylesheet Language Transformations (XSLT) for repurposing, editing, and reformatting metadata. Catalogers have the requisite skills for working with any metadata schema, and if they are excluded from metadata work, libraries are wasting a valuable human resource.
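    A minimal sketch of the kind of XSLT-driven schema transformation discussed above (the stylesheet and record are invented examples, applied here via Python's lxml):

      # Hypothetical sketch: rename a metadata element (creator -> author)
      # with an XSLT identity transform plus one overriding template.
      from lxml import etree

      xslt = etree.XML(b"""\
      <xsl:stylesheet version="1.0"
                      xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <xsl:template match="creator">
          <author><xsl:value-of select="."/></author>
        </xsl:template>
        <xsl:template match="@*|node()">
          <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
        </xsl:template>
      </xsl:stylesheet>""")

      record = etree.XML(b"<record><creator>Ilik, V.</creator></record>")
      print(etree.tostring(etree.XSLT(xslt)(record)))
      # b'<record><author>Ilik, V.</author></record>'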
    Date
    10. 9.2000 17:38:22
  16. Thelwall, M.; Buckley, K.; Paltoglou, G.; Cai, D.; Kappas, A.: Sentiment strength detection in short informal text (2010) 0.06
    0.057496607 = product of:
      0.114993215 = sum of:
        0.114993215 = sum of:
          0.08013651 = weight(_text_:learning in 4200) [ClassicSimilarity], result of:
            0.08013651 = score(doc=4200,freq=4.0), product of:
              0.22973695 = queryWeight, product of:
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.05145426 = queryNorm
              0.34881854 = fieldWeight in 4200, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4200)
          0.03485671 = weight(_text_:22 in 4200) [ClassicSimilarity], result of:
            0.03485671 = score(doc=4200,freq=2.0), product of:
              0.18018405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05145426 = queryNorm
              0.19345059 = fieldWeight in 4200, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4200)
      0.5 = coord(1/2)
    
    Abstract
    A huge number of informal messages are posted every day in social network sites, blogs, and discussion forums. Emotions seem to be frequently important in these texts for expressing friendship, showing social support or as part of online arguments. Algorithms to identify sentiment and sentiment strength are needed to help understand the role of emotion in this informal communication and also to identify inappropriate or anomalous affective utterances, potentially associated with threatening behavior to the self or others. Nevertheless, existing sentiment detection algorithms tend to be commercially oriented, designed to identify opinions about products rather than user behaviors. This article partly fills this gap with a new algorithm, SentiStrength, to extract sentiment strength from informal English text, using new methods to exploit the de facto grammars and spelling styles of cyberspace. Applied to MySpace comments and with a lookup table of term sentiment strengths optimized by machine learning, SentiStrength is able to predict positive emotion with 60.6% accuracy and negative emotion with 72.8% accuracy, both based upon strength scales of 1-5. The former, but not the latter, is better than baseline and a wide range of general machine learning approaches.
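    A toy sketch in the spirit of the lookup-table approach described above (the lexicon entries and scoring rules are invented; SentiStrength itself also handles boosters, negation, spelling variants, and emoticons):

      # Hypothetical sketch: report dual positive/negative strengths on the
      # 1-5 scales, driven by a lookup table of term sentiment strengths.
      lexicon = {"love": 3, "great": 2, "hate": -4, "awful": -3}  # invented

      def sentiment_strength(text):
          pos, neg = 1, -1          # scale baselines: +1..+5 and -1..-5
          for token in text.lower().split():
              s = lexicon.get(token.strip(".,!?"), 0)
              if s > 0:
                  pos = max(pos, min(5, 1 + s))
              elif s < 0:
                  neg = min(neg, max(-5, -1 + s))
          return pos, neg

      print(sentiment_strength("I love this but the ending was awful!"))  # (4, -4)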
    Date
    22. 1.2011 14:29:23
  17. Zhang, P.; Soergel, D.: Towards a comprehensive model of the cognitive process and mechanisms of individual sensemaking (2014) 0.06
    0.057496607 = product of:
      0.114993215 = sum of:
        0.114993215 = sum of:
          0.08013651 = weight(_text_:learning in 1344) [ClassicSimilarity], result of:
            0.08013651 = score(doc=1344,freq=4.0), product of:
              0.22973695 = queryWeight, product of:
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.05145426 = queryNorm
              0.34881854 = fieldWeight in 1344, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1344)
          0.03485671 = weight(_text_:22 in 1344) [ClassicSimilarity], result of:
            0.03485671 = score(doc=1344,freq=2.0), product of:
              0.18018405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05145426 = queryNorm
              0.19345059 = fieldWeight in 1344, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1344)
      0.5 = coord(1/2)
    
    Abstract
    This review introduces a comprehensive model of the cognitive process and mechanisms of individual sensemaking to provide a theoretical basis for: empirical studies that improve our understanding of the cognitive process and mechanisms of sensemaking, and the integration of the results of such studies; education in critical thinking and sensemaking skills; and the design of sensemaking assistant tools that support and guide users. The paper reviews and extends existing sensemaking models with ideas from learning and cognition. It reviews the literature on sensemaking models in human-computer interaction (HCI), cognitive systems engineering, organizational communication, and library and information science (LIS), as well as learning theories, cognitive psychology, and task-based information seeking. The model resulting from this synthesis provides a stronger basis for explaining sensemaking behaviors and conceptual changes. The model illustrates the iterative processes of sensemaking; it extends existing models that focus on activities by integrating cognitive mechanisms, the creation of instantiated structure elements of knowledge, and different types of conceptual change, to show a complete picture of the cognitive processes of sensemaking. The processes and cognitive mechanisms identified provide better foundations for knowledge creation, organization, and sharing practices and a stronger basis for the design of sensemaking assistant systems and tools.
    Date
    22. 8.2014 16:55:39
  18. Khoo, C.S.G.; Teng, T.B.-R.; Ng, H.-C.; Wong, K.-P.: Developing a taxonomy to support user browsing and learning in a digital heritage portal with crowd-sourced content (2014) 0.06
    0.057496607 = product of:
      0.114993215 = sum of:
        0.114993215 = sum of:
          0.08013651 = weight(_text_:learning in 1433) [ClassicSimilarity], result of:
            0.08013651 = score(doc=1433,freq=4.0), product of:
              0.22973695 = queryWeight, product of:
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.05145426 = queryNorm
              0.34881854 = fieldWeight in 1433, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1433)
          0.03485671 = weight(_text_:22 in 1433) [ClassicSimilarity], result of:
            0.03485671 = score(doc=1433,freq=2.0), product of:
              0.18018405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05145426 = queryNorm
              0.19345059 = fieldWeight in 1433, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1433)
      0.5 = coord(1/2)
    
    Abstract
    A taxonomy is being developed to organize the content of a cultural heritage portal called the Singapore Memory Portal, which provides access to a collection of memory postings about Singapore's history, culture, society, life/lifestyle, and landscape/architecture. The taxonomy is divided into an upper-level taxonomy to support user browsing of topics, and a lower-level taxonomy to represent the types of information available on specific topics, to support user learning and information synthesis. The initial version of the upper-level taxonomy was developed based on potential users' expectations of the content coverage of the portal. The categories are centered on the themes of daily life/lifestyle, historically significant events, disasters and crises, festivals, a variety of cultural elements, and national issues. The lower-level taxonomy was derived from attributes and relations extracted from essays and mindmaps produced by coders after reading memory postings for a sample of topics.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  19. Conde, A.; Larrañaga, M.; Arruarte, A.; Elorriaga, J.A.; Roth, D.: LiTeWi: a combined term extraction and entity linking method for eliciting educational ontologies from textbooks (2016) 0.06
    0.057496607 = product of:
      0.114993215 = sum of:
        0.114993215 = sum of:
          0.08013651 = weight(_text_:learning in 2645) [ClassicSimilarity], result of:
            0.08013651 = score(doc=2645,freq=4.0), product of:
              0.22973695 = queryWeight, product of:
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.05145426 = queryNorm
              0.34881854 = fieldWeight in 2645, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2645)
          0.03485671 = weight(_text_:22 in 2645) [ClassicSimilarity], result of:
            0.03485671 = score(doc=2645,freq=2.0), product of:
              0.18018405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05145426 = queryNorm
              0.19345059 = fieldWeight in 2645, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2645)
      0.5 = coord(1/2)
    
    Abstract
    Major efforts have been conducted on ontology learning, that is, semiautomatic processes for the construction of domain ontologies from diverse sources of information. In the past few years, a research trend has focused on the construction of educational ontologies, that is, ontologies to be used for educational purposes. The identification of the terminology is crucial to building ontologies. Term extraction techniques allow the identification of domain-related terms from electronic resources. This paper presents LiTeWi, a novel method that combines current unsupervised term extraction approaches for creating educational ontologies for technology-supported learning systems from electronic textbooks. LiTeWi uses Wikipedia as an additional information source. Wikipedia contains more than 30 million articles covering the terminology of nearly every domain in 288 languages, which makes it an appropriate generic corpus for term extraction. Furthermore, given that its content is available in several languages, it promotes both domain and language independence. LiTeWi is aimed at being used by teachers, who usually develop their didactic material from textbooks. To evaluate its performance, LiTeWi was tuned up using a textbook on object-oriented programming and then tested with two textbooks from different domains: astronomy and molecular biology.
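    One hypothetical way to use Wikipedia as a filter for extracted terms, in the spirit of the approach above (the real method combines several unsupervised extractors and entity linking; an invented title set stands in for a real title dump or API):

      # Hypothetical sketch: keep candidate terms from a textbook only if
      # they match a Wikipedia article title.
      wikipedia_titles = {"inheritance", "object", "compiler"}

      def extract_candidates(text):
          # naive unigram candidates; real extractors use n-grams and statistics
          return {w.strip(".,").lower() for w in text.split()}

      sentence = "Inheritance lets a class reuse the fields of another object."
      terms = {t for t in extract_candidates(sentence) if t in wikipedia_titles}
      print(terms)  # {'inheritance', 'object'}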
    Date
    22. 1.2016 12:38:14
  20. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.06
    0.057496607 = product of:
      0.114993215 = sum of:
        0.114993215 = sum of:
          0.08013651 = weight(_text_:learning in 4553) [ClassicSimilarity], result of:
            0.08013651 = score(doc=4553,freq=4.0), product of:
              0.22973695 = queryWeight, product of:
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.05145426 = queryNorm
              0.34881854 = fieldWeight in 4553, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4553)
          0.03485671 = weight(_text_:22 in 4553) [ClassicSimilarity], result of:
            0.03485671 = score(doc=4553,freq=2.0), product of:
              0.18018405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05145426 = queryNorm
              0.19345059 = fieldWeight in 4553, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4553)
      0.5 = coord(1/2)
    
    Abstract
    Semantic Web knowledge representation standards, and in particular RDF and OWL, often come endowed with a formal semantics which is considered to be of fundamental importance for the field. Reasoning, i.e., the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms which can be proven to be sound, complete, and terminating, i.e., correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever-increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which bear high promise for high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard regarding correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a Deep Learning system on RDF knowledge graphs such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall compared to the deductive gold standard.
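    A greatly simplified sketch of learning to approximate deductive entailment (toy data and a plain logistic regression stand in for the paper's deep network over RDF graph encodings):

      # Hypothetical sketch: learn to predict whether a candidate triple is
      # entailed, from stand-in embedded (subject, predicate, object) features.
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 12))             # stand-in triple embeddings
      y = (X[:, 0] + X[:, 3] > 0).astype(float)  # stand-in "entailed" labels

      w = np.zeros(12)
      for _ in range(200):                       # logistic regression via GD
          p = 1.0 / (1.0 + np.exp(-X @ w))
          w -= 0.1 * X.T @ (p - y) / len(y)

      pred = 1.0 / (1.0 + np.exp(-X @ w)) > 0.5
      print("train accuracy:", (pred == y).mean())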
    Date
    16.11.2018 14:22:01

Languages

  • e 712
  • d 204
  • i 2
  • a 1
  • hu 1

Types

  • a 803
  • el 90
  • m 71
  • s 24
  • x 14
  • r 7
  • b 5
  • i 1
  • z 1

Themes

Subjects

Classifications