Search (2099 results, page 1 of 105)

  • Filter: year_i:[2010 TO 2020}
  1. Weber, S.: ¬Der Angriff der Digitalgeräte auf die übrigen Lernmedien (2015) 0.10
    0.10453274 = product of:
      0.15679911 = sum of:
        0.07122652 = weight(_text_:based in 2505) [ClassicSimilarity], result of:
          0.07122652 = score(doc=2505,freq=2.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.46604872 = fieldWeight in 2505, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.109375 = fieldNorm(doc=2505)
        0.08557258 = product of:
          0.17114516 = sum of:
            0.17114516 = weight(_text_:training in 2505) [ClassicSimilarity], result of:
              0.17114516 = score(doc=2505,freq=2.0), product of:
                0.23690371 = queryWeight, product of:
                  4.67046 = idf(docFreq=1125, maxDocs=44218)
                  0.050723847 = queryNorm
                0.722425 = fieldWeight in 2505, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.67046 = idf(docFreq=1125, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2505)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Theme
    Computer Based Training
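    The nested score breakdowns shown under each result are Lucene ClassicSimilarity "explain" trees: per-term weight = queryWeight * fieldWeight, summed and scaled by the coord factors. As a minimal sketch (the variable names are ours; every number is copied from the tree of result 1 above), the following reproduces that arithmetic in Python:

      # Recomputing the ClassicSimilarity explain tree of result 1 from its printed values.
      query_norm = 0.050723847

      def term_weight(freq, idf, field_norm):
          """weight = queryWeight * fieldWeight = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)"""
          query_weight = idf * query_norm
          field_weight = (freq ** 0.5) * idf * field_norm
          return query_weight * field_weight

      based    = term_weight(2.0, 3.0129938, 0.109375)      # ~0.07122652
      training = 0.5 * term_weight(2.0, 4.67046, 0.109375)  # coord(1/2) * ~0.17114516
      total    = (2.0 / 3.0) * (based + training)           # coord(2/3) -> ~0.10453274
      print(total)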
  2. Devaul, H.; Diekema, A.R.; Ostwald, J.: Computer-assisted assignment of educational standards using natural language processing (2011) 0.10
    0.09673858 = product of:
      0.14510787 = sum of:
        0.03052565 = weight(_text_:based in 4199) [ClassicSimilarity], result of:
          0.03052565 = score(doc=4199,freq=2.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.19973516 = fieldWeight in 4199, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.046875 = fieldNorm(doc=4199)
        0.11458221 = sum of:
          0.073347926 = weight(_text_:training in 4199) [ClassicSimilarity], result of:
            0.073347926 = score(doc=4199,freq=2.0), product of:
              0.23690371 = queryWeight, product of:
                4.67046 = idf(docFreq=1125, maxDocs=44218)
                0.050723847 = queryNorm
              0.3096107 = fieldWeight in 4199, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.67046 = idf(docFreq=1125, maxDocs=44218)
                0.046875 = fieldNorm(doc=4199)
          0.041234285 = weight(_text_:22 in 4199) [ClassicSimilarity], result of:
            0.041234285 = score(doc=4199,freq=2.0), product of:
              0.17762627 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050723847 = queryNorm
              0.23214069 = fieldWeight in 4199, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4199)
      0.6666667 = coord(2/3)
    
    Date
    22. 1.2011 14:25:32
    Theme
    Computer Based Training
  3. Arbelaitz, O.; Martínez-Otzeta, J.M.; Muguerza, J.: User modeling in a social network for cognitively disabled people (2016) 0.10
    0.09673858 = product of:
      0.14510787 = sum of:
        0.03052565 = weight(_text_:based in 2639) [ClassicSimilarity], result of:
          0.03052565 = score(doc=2639,freq=2.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.19973516 = fieldWeight in 2639, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.046875 = fieldNorm(doc=2639)
        0.11458221 = sum of:
          0.073347926 = weight(_text_:training in 2639) [ClassicSimilarity], result of:
            0.073347926 = score(doc=2639,freq=2.0), product of:
              0.23690371 = queryWeight, product of:
                4.67046 = idf(docFreq=1125, maxDocs=44218)
                0.050723847 = queryNorm
              0.3096107 = fieldWeight in 2639, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.67046 = idf(docFreq=1125, maxDocs=44218)
                0.046875 = fieldNorm(doc=2639)
          0.041234285 = weight(_text_:22 in 2639) [ClassicSimilarity], result of:
            0.041234285 = score(doc=2639,freq=2.0), product of:
              0.17762627 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050723847 = queryNorm
              0.23214069 = fieldWeight in 2639, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2639)
      0.6666667 = coord(2/3)
    
    Abstract
    Online communities are becoming an important tool in the communication and participation processes of our society. However, the most widespread applications are difficult to use for people with disabilities, or may involve some risks if no previous training has been undertaken. This work describes a novel social network for cognitively disabled people along with a clustering-based method for modeling the activity and socialization processes of its users in a noninvasive way. This closed social network, called Guremintza, is specifically designed for people with cognitive disabilities and provides the network administrators (e.g., social workers) with two types of reports: summary statistics of network usage and behavior patterns discovered by a data mining process. Experiments conducted in an initial stage of the network show that the discovered patterns are meaningful to the social workers, who find them useful in monitoring the progress of the users.
    Date
    22. 1.2016 12:02:26
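    Result 3 above models user activity with a clustering-based, noninvasive method. The abstract does not name the concrete algorithm, so the following is only a generic sketch of activity clustering with k-means; the feature columns and counts are invented for illustration.

      # Hypothetical sketch of activity-based user clustering (not the Guremintza implementation).
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.preprocessing import StandardScaler

      # Each row: one user; columns: illustrative weekly activity counts
      # [logins, messages_sent, friend_requests, content_views]
      activity = np.array([
          [ 2,  1, 0,  5],
          [14, 20, 3, 40],
          [ 1,  0, 0,  2],
          [12, 18, 2, 35],
          [ 6,  5, 1, 12],
      ])

      X = StandardScaler().fit_transform(activity)
      model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
      print(model.labels_)   # cluster id per user, e.g. low- vs. high-activity groups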
  4. Semantic keyword-based search on structured data sources : First COST Action IC1302 International KEYSTONE Conference, IKC 2015, Coimbra, Portugal, September 8-9, 2015. Revised Selected Papers (2016) 0.09
    0.091748565 = product of:
      0.13762285 = sum of:
        0.04984818 = weight(_text_:based in 2753) [ClassicSimilarity], result of:
          0.04984818 = score(doc=2753,freq=12.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.32616615 = fieldWeight in 2753, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03125 = fieldNorm(doc=2753)
        0.08777467 = sum of:
          0.048898615 = weight(_text_:training in 2753) [ClassicSimilarity], result of:
            0.048898615 = score(doc=2753,freq=2.0), product of:
              0.23690371 = queryWeight, product of:
                4.67046 = idf(docFreq=1125, maxDocs=44218)
                0.050723847 = queryNorm
              0.20640713 = fieldWeight in 2753, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.67046 = idf(docFreq=1125, maxDocs=44218)
                0.03125 = fieldNorm(doc=2753)
          0.038876057 = weight(_text_:22 in 2753) [ClassicSimilarity], result of:
            0.038876057 = score(doc=2753,freq=4.0), product of:
              0.17762627 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050723847 = queryNorm
              0.21886435 = fieldWeight in 2753, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2753)
      0.6666667 = coord(2/3)
    
    Abstract
    This book constitutes the thoroughly refereed post-conference proceedings of the First COST Action IC1302 International KEYSTONE Conference on Semantic Keyword-based Search on Structured Data Sources, IKC 2015, held in Coimbra, Portugal, in September 2015. The 13 revised full papers, 3 revised short papers, and 2 invited papers were carefully reviewed and selected from 22 initial submissions. The paper topics cover techniques for keyword search, semantic data management, social Web and social media, information retrieval, and benchmarking for search on big data.
    Content
    Contents: Professional Collaborative Information Seeking: On Traceability and Creative Sensemaking / Nürnberger, Andreas (et al.) - Recommending Web Pages Using Item-Based Collaborative Filtering Approaches / Cadegnani, Sara (et al.) - Processing Keyword Queries Under Access Limitations / Calì, Andrea (et al.) - Balanced Large Scale Knowledge Matching Using LSH Forest / Cochez, Michael (et al.) - Improving css-KNN Classification Performance by Shifts in Training Data / Draszawka, Karol (et al.) - Classification Using Various Machine Learning Methods and Combinations of Key-Phrases and Visual Features / HaCohen-Kerner, Yaakov (et al.) - Mining Workflow Repositories for Improving Fragments Reuse / Harmassi, Mariem (et al.) - AgileDBLP: A Search-Based Mobile Application for Structured Digital Libraries / Ifrim, Claudia (et al.) - Support of Part-Whole Relations in Query Answering / Kozikowski, Piotr (et al.) - Key-Phrases as Means to Estimate Birth and Death Years of Jewish Text Authors / Mughaz, Dror (et al.) - Visualization of Uncertainty in Tag Clouds / Platis, Nikos (et al.) - Multimodal Image Retrieval Based on Keywords and Low-Level Image Features / Pobar, Miran (et al.) - Toward Optimized Multimodal Concept Indexing / Rekabsaz, Navid (et al.) - Semantic URL Analytics to Support Efficient Annotation of Large Scale Web Archives / Souza, Tarcisio (et al.) - Indexing of Textual Databases Based on Lexical Resources: A Case Study for Serbian / Stankovic, Ranka (et al.) - Domain-Specific Modeling: Towards a Food and Drink Gazetteer / Tagarev, Andrey (et al.) - Analysing Entity Context in Multilingual Wikipedia to Support Entity-Centric Retrieval Applications / Zhou, Yiwei (et al.)
    Date
    1. 2.2016 18:25:22
  5. Chianese, A.; Cantone, F.; Caropreso, M.; Moscato, V.: ARCHAEOLOGY 2.0 : Cultural E-Learning tools and distributed repositories supported by SEMANTICA, a System for Learning Object Retrieval and Adaptive Courseware Generation for e-learning environments. (2010) 0.09
    0.08764 = product of:
      0.13146 = sum of:
        0.035974823 = weight(_text_:based in 3733) [ClassicSimilarity], result of:
          0.035974823 = score(doc=3733,freq=4.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.23539014 = fieldWeight in 3733, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3733)
        0.09548517 = sum of:
          0.061123267 = weight(_text_:training in 3733) [ClassicSimilarity], result of:
            0.061123267 = score(doc=3733,freq=2.0), product of:
              0.23690371 = queryWeight, product of:
                4.67046 = idf(docFreq=1125, maxDocs=44218)
                0.050723847 = queryNorm
              0.2580089 = fieldWeight in 3733, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.67046 = idf(docFreq=1125, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3733)
          0.034361906 = weight(_text_:22 in 3733) [ClassicSimilarity], result of:
            0.034361906 = score(doc=3733,freq=2.0), product of:
              0.17762627 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050723847 = queryNorm
              0.19345059 = fieldWeight in 3733, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3733)
      0.6666667 = coord(2/3)
    
    Abstract
    The focus of the present research has been the development and application to Virtual Archaeology of a Web-based framework for Learning Object indexing and retrieval. The paper presents the main outcomes of an experiment carried out by an interdisciplinary group at the Federico II University of Naples. Our team is composed of researchers in both ICT and the Humanities, in particular in the domain of Virtual Archaeology and Cultural Heritage Informatics, with the aim of developing specific ICT methodological approaches to Virtual Archaeology. The methodological background is the progressive diffusion of Web 2.0 technologies and the attempt to analyze their impact and perspectives in the Cultural Heritage field. In particular, we approached the specific requirements of so-called Learning 2.0 and the possibility of improving the automation of modular courseware generation in Virtual Archaeology didactics. The developed framework, called SEMANTICA, was applied to Virtual Archaeology domain ontologies in order to generate a didactic course in a semi-automated way. The main results of this test and the first student feedback on the use of the course are presented and discussed.
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P.Ohly
    Theme
    Computer Based Training
  6. Olivares-Rodríguez, C.; Guenaga, M.; Garaizar, P.: Using children's search patterns to predict the quality of their creative problem solving (2018) 0.09
    0.08764 = product of:
      0.13146 = sum of:
        0.035974823 = weight(_text_:based in 4635) [ClassicSimilarity], result of:
          0.035974823 = score(doc=4635,freq=4.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.23539014 = fieldWeight in 4635, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4635)
        0.09548517 = sum of:
          0.061123267 = weight(_text_:training in 4635) [ClassicSimilarity], result of:
            0.061123267 = score(doc=4635,freq=2.0), product of:
              0.23690371 = queryWeight, product of:
                4.67046 = idf(docFreq=1125, maxDocs=44218)
                0.050723847 = queryNorm
              0.2580089 = fieldWeight in 4635, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.67046 = idf(docFreq=1125, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4635)
          0.034361906 = weight(_text_:22 in 4635) [ClassicSimilarity], result of:
            0.034361906 = score(doc=4635,freq=2.0), product of:
              0.17762627 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050723847 = queryNorm
              0.19345059 = fieldWeight in 4635, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4635)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose The purpose of this paper is to propose a computational model that implicitly predicts the creative quality of children's solutions by analyzing their query patterns in a problem-solving-based lesson. Design/methodology/approach A search task related to the competencies acquired in the classroom was used to measure children's creativity automatically. A blind review of the creative quality of 255 primary school students' solutions was carried out. Findings While there are many creativity training programs that have proven effective, many of them require measuring creativity beforehand, which involves time-consuming work by experienced reviewers and is therefore far removed from primary school classroom dynamics. The authors have developed a model that predicts the creative quality of a given solution using the pattern of search queries as input. This model has been used to predict the creative quality of 255 primary school students' solutions with 80 percent sensitivity. Research limitations/implications Although the research was conducted with just one search task, participants come from two different countries. The authors therefore hope that this model enables detection of non-creative solutions, prompt intervention, and improvement of the creative quality of solutions. Originality/value This is the first implicit classification model of query patterns for predicting the creative quality of children's solutions. The model is based on a conceptual relation between the concept association of creative thinking and the query chain model of information search.
    Date
    20. 1.2015 18:30:22
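    Result 6 above reports 80 percent sensitivity for flagging solutions from children's query patterns. Sensitivity is simply recall on the target class; a minimal sketch of that computation (the confusion-matrix counts are invented, not the paper's):

      # Sensitivity (recall) = TP / (TP + FN); the counts below are invented for illustration.
      tp, fn = 80, 20                  # solutions of the target class caught / missed
      sensitivity = tp / (tp + fn)
      print(sensitivity)               # 0.8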
  7. Mai, F.; Galke, L.; Scherp, A.: Using deep learning for title-based semantic subject indexing to reach competitive performance to full-text (2018) 0.08
    0.079280265 = product of:
      0.11892039 = sum of:
        0.044059984 = weight(_text_:based in 4093) [ClassicSimilarity], result of:
          0.044059984 = score(doc=4093,freq=6.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.28829288 = fieldWeight in 4093, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4093)
        0.07486041 = product of:
          0.14972082 = sum of:
            0.14972082 = weight(_text_:training in 4093) [ClassicSimilarity], result of:
              0.14972082 = score(doc=4093,freq=12.0), product of:
                0.23690371 = queryWeight, product of:
                  4.67046 = idf(docFreq=1125, maxDocs=44218)
                  0.050723847 = queryNorm
                0.6319902 = fieldWeight in 4093, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  4.67046 = idf(docFreq=1125, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4093)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    For (semi-)automated subject indexing systems in digital libraries, it is often more practical to use metadata such as the title of a publication instead of the full-text or the abstract. Therefore, it is desirable to have good text mining and text classification algorithms that operate well already on the title of a publication. So far, the classification performance on titles is not competitive with the performance on the full-texts if the same number of training samples is used for training. However, it is much easier to obtain title data in large quantities and to use it for training than full-text data. In this paper, we investigate how models obtained from training on increasing amounts of title training data compare to models from training on a constant number of full-texts. We evaluate this question on a large-scale dataset from the medical domain (PubMed) and from economics (EconBiz). In these datasets, the titles and annotations of millions of publications are available, and they outnumber the available full-texts by a factor of 20 and 15, respectively. To exploit these large amounts of data to their full potential, we develop three strong deep learning classifiers and evaluate their performance on the two datasets. The results are promising. On the EconBiz dataset, all three classifiers outperform their full-text counterparts by a large margin. The best title-based classifier outperforms the best full-text method by 9.9%. On the PubMed dataset, the best title-based method almost reaches the performance of the best full-text classifier, with a difference of only 2.9%.
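    Result 7 above trains deep learning classifiers on titles only. As a hedged, deliberately simpler stand-in (not the paper's models, and with made-up toy data rather than PubMed or EconBiz), the sketch below shows title-based multi-label subject indexing with TF-IDF features and one-vs-rest logistic regression:

      # Simplified stand-in for title-based subject indexing (illustrative only).
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.multiclass import OneVsRestClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import MultiLabelBinarizer

      titles = [
          "Deep learning for medical image segmentation",
          "Macroeconomic effects of monetary policy",
          "Neural networks in econometric forecasting",
      ]
      subjects = [{"medicine", "machine learning"},
                  {"economics"},
                  {"economics", "machine learning"}]

      mlb = MultiLabelBinarizer()
      Y = mlb.fit_transform(subjects)                       # multi-label indicator matrix

      clf = make_pipeline(TfidfVectorizer(),
                          OneVsRestClassifier(LogisticRegression(max_iter=1000)))
      clf.fit(titles, Y)
      pred = clf.predict(["Forecasting inflation with neural networks"])
      print(mlb.inverse_transform(pred))                    # predicted subject labels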
  8. Untiet-Kepp, S.-J.; Rösler, A.; Griesbaum, J.: CollabUni - Social Software zur Unterstützung kollaborativen Wissensmanagements und selbstgesteuerten Lernens (2010) 0.07
    0.07466623 = product of:
      0.11199935 = sum of:
        0.050876085 = weight(_text_:based in 2819) [ClassicSimilarity], result of:
          0.050876085 = score(doc=2819,freq=2.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.33289194 = fieldWeight in 2819, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.078125 = fieldNorm(doc=2819)
        0.061123267 = product of:
          0.12224653 = sum of:
            0.12224653 = weight(_text_:training in 2819) [ClassicSimilarity], result of:
              0.12224653 = score(doc=2819,freq=2.0), product of:
                0.23690371 = queryWeight, product of:
                  4.67046 = idf(docFreq=1125, maxDocs=44218)
                  0.050723847 = queryNorm
                0.5160178 = fieldWeight in 2819, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.67046 = idf(docFreq=1125, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2819)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Theme
    Computer Based Training
  9. Zeilmann, K.; Beer, K.; dpa: Tablet statt Lehrbuch : wie die Digitalisierung die Unis verändert (2016) 0.07
    0.07466623 = product of:
      0.11199935 = sum of:
        0.050876085 = weight(_text_:based in 2699) [ClassicSimilarity], result of:
          0.050876085 = score(doc=2699,freq=2.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.33289194 = fieldWeight in 2699, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.078125 = fieldNorm(doc=2699)
        0.061123267 = product of:
          0.12224653 = sum of:
            0.12224653 = weight(_text_:training in 2699) [ClassicSimilarity], result of:
              0.12224653 = score(doc=2699,freq=2.0), product of:
                0.23690371 = queryWeight, product of:
                  4.67046 = idf(docFreq=1125, maxDocs=44218)
                  0.050723847 = queryNorm
                0.5160178 = fieldWeight in 2699, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.67046 = idf(docFreq=1125, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2699)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Theme
    Computer Based Training
  10. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.07
    0.074059054 = product of:
      0.111088574 = sum of:
        0.08056292 = product of:
          0.24168874 = sum of:
            0.24168874 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.24168874 = score(doc=400,freq=2.0), product of:
                0.43003735 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050723847 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
        0.03052565 = weight(_text_:based in 400) [ClassicSimilarity], result of:
          0.03052565 = score(doc=400,freq=2.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.19973516 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
      0.6666667 = coord(2/3)
    
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
    Source
    Graph-Based Methods for Natural Language Processing - proceedings of the Thirteenth Workshop (TextGraphs-13): November 4, 2019, Hong Kong : EMNLP-IJCNLP 2019. Ed.: Dmitry Ustalov
  11. Ajiferuke, I.; Lu, K.; Wolfram, D.: ¬A comparison of citer and citation-based measure outcomes for multiple disciplines (2010) 0.07
    0.071304485 = product of:
      0.10695672 = sum of:
        0.08633958 = weight(_text_:based in 4000) [ClassicSimilarity], result of:
          0.08633958 = score(doc=4000,freq=16.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.56493634 = fieldWeight in 4000, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.046875 = fieldNorm(doc=4000)
        0.020617142 = product of:
          0.041234285 = sum of:
            0.041234285 = weight(_text_:22 in 4000) [ClassicSimilarity], result of:
              0.041234285 = score(doc=4000,freq=2.0), product of:
                0.17762627 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050723847 = queryNorm
                0.23214069 = fieldWeight in 4000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4000)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Author research impact was examined based on citer analysis (the number of citers as opposed to the number of citations) for 90 highly cited authors grouped into three broad subject areas. Citer-based outcome measures were also compared with more traditional citation-based measures for levels of association. The authors found that there are significant differences in citer-based outcomes among the three broad subject areas examined and that there is a high degree of correlation between citer and citation-based measures for all measures compared, except for two outcomes calculated for the social sciences. Citer-based measures do produce slightly different rankings of authors based on citer counts when compared to more traditional citation counts. Examples are provided. Citation measures may not adequately address the influence, or reach, of an author because citations usually do not address the origin of the citation beyond self-citations.
    Date
    28. 9.2010 12:54:22
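    Result 11 above contrasts citer counts (distinct citing authors) with plain citation counts. A minimal sketch of the distinction, on an invented citation list:

      # Citations vs. citers: a citing author counted per paper (citation) or once (citer).
      citations = [
          {"citing_paper": "P1", "citing_authors": ["Smith", "Lee"]},
          {"citing_paper": "P2", "citing_authors": ["Smith"]},
          {"citing_paper": "P3", "citing_authors": ["Smith", "Lee"]},
      ]

      citation_count = len(citations)                                         # 3 citing papers
      citer_count = len({a for c in citations for a in c["citing_authors"]})  # 2 distinct citers
      print(citation_count, citer_count)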
  12. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I.: Attention Is all you need (2017) 0.07
    0.0711273 = product of:
      0.10669095 = sum of:
        0.04316979 = weight(_text_:based in 970) [ClassicSimilarity], result of:
          0.04316979 = score(doc=970,freq=4.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.28246817 = fieldWeight in 970, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.046875 = fieldNorm(doc=970)
        0.06352116 = product of:
          0.12704232 = sum of:
            0.12704232 = weight(_text_:training in 970) [ClassicSimilarity], result of:
              0.12704232 = score(doc=970,freq=6.0), product of:
                0.23690371 = queryWeight, product of:
                  4.67046 = idf(docFreq=1125, maxDocs=44218)
                  0.050723847 = queryNorm
                0.53626144 = fieldWeight in 970, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.67046 = idf(docFreq=1125, maxDocs=44218)
                  0.046875 = fieldNorm(doc=970)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
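    Result 12 above introduces the Transformer, whose core operation is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A minimal NumPy sketch of that single operation (not the full multi-head architecture):

      # Scaled dot-product attention as described in "Attention Is All You Need".
      import numpy as np

      def scaled_dot_product_attention(Q, K, V):
          d_k = Q.shape[-1]
          scores = Q @ K.T / np.sqrt(d_k)                        # query-key similarities
          weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
          weights /= weights.sum(axis=-1, keepdims=True)         # row-wise softmax
          return weights @ V                                     # weighted sum of values

      rng = np.random.default_rng(0)
      Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
      print(scaled_dot_product_attention(Q, K, V).shape)         # (4, 8)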
  13. Pobar, M. et al.: Multimodal image retrieval based on keywords and low-level image features (2016) 0.07
    0.07087437 = product of:
      0.10631155 = sum of:
        0.071949646 = weight(_text_:based in 2757) [ClassicSimilarity], result of:
          0.071949646 = score(doc=2757,freq=4.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.47078028 = fieldWeight in 2757, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.078125 = fieldNorm(doc=2757)
        0.034361906 = product of:
          0.06872381 = sum of:
            0.06872381 = weight(_text_:22 in 2757) [ClassicSimilarity], result of:
              0.06872381 = score(doc=2757,freq=2.0), product of:
                0.17762627 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050723847 = queryNorm
                0.38690117 = fieldWeight in 2757, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2757)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    1. 2.2016 18:25:22
    Source
    Semantic keyword-based search on structured data sources: First COST Action IC1302 International KEYSTONE Conference, IKC 2015, Coimbra, Portugal, September 8-9, 2015. Revised Selected Papers. Eds.: J. Cardoso et al
  14. Stankovic, R. et al.: Indexing of textual databases based on lexical resources : a case study for Serbian (2016) 0.07
    0.07087437 = product of:
      0.10631155 = sum of:
        0.071949646 = weight(_text_:based in 2759) [ClassicSimilarity], result of:
          0.071949646 = score(doc=2759,freq=4.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.47078028 = fieldWeight in 2759, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.078125 = fieldNorm(doc=2759)
        0.034361906 = product of:
          0.06872381 = sum of:
            0.06872381 = weight(_text_:22 in 2759) [ClassicSimilarity], result of:
              0.06872381 = score(doc=2759,freq=2.0), product of:
                0.17762627 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050723847 = queryNorm
                0.38690117 = fieldWeight in 2759, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2759)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    1. 2.2016 18:25:22
    Source
    Semantic keyword-based search on structured data sources: First COST Action IC1302 International KEYSTONE Conference, IKC 2015, Coimbra, Portugal, September 8-9, 2015. Revised Selected Papers. Eds.: J. Cardoso et al
  15. Yu, N.: Exploring co-training strategies for opinion detection (2014) 0.07
    0.06954181 = product of:
      0.10431271 = sum of:
        0.035974823 = weight(_text_:based in 1503) [ClassicSimilarity], result of:
          0.035974823 = score(doc=1503,freq=4.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.23539014 = fieldWeight in 1503, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1503)
        0.06833789 = product of:
          0.13667578 = sum of:
            0.13667578 = weight(_text_:training in 1503) [ClassicSimilarity], result of:
              0.13667578 = score(doc=1503,freq=10.0), product of:
                0.23690371 = queryWeight, product of:
                  4.67046 = idf(docFreq=1125, maxDocs=44218)
                  0.050723847 = queryNorm
                0.57692546 = fieldWeight in 1503, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.67046 = idf(docFreq=1125, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1503)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    For the last decade or so, sentiment analysis, which aims to automatically identify opinions, polarities, or emotions from user-generated content (e.g., blogs, tweets), has attracted interest from both academic and industrial communities. Most sentiment analysis strategies fall into 2 categories: lexicon-based and corpus-based approaches. While the latter often requires sentiment-labeled data to build a machine learning model, both approaches need sentiment-labeled data for evaluation. Unfortunately, most data domains lack sufficient quantities of labeled data, especially at the subdocument level. Semisupervised learning (SSL), a machine learning technique that requires only a few labeled examples and can automatically label unlabeled data, is a promising strategy to deal with the issue of insufficient labeled data. Although previous studies have shown promising results of applying various SSL algorithms to solve a sentiment-analysis problem, co-training, an SSL algorithm, has not attracted much attention for sentiment analysis largely due to its restricted assumptions. Therefore, this study focuses on revisiting co-training in depth and discusses several co-training strategies for sentiment analysis following a loose assumption. Results suggest that co-training can be more effective than can other currently adopted SSL methods for sentiment analysis.
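    Result 15 above revisits co-training, a semi-supervised scheme in which classifiers trained on different feature views pseudo-label their most confident unlabeled examples for a shared labeled pool. The sketch below is a generic co-training loop under loose assumptions, not the study's exact configuration; the two views and the data are synthetic.

      # Generic co-training sketch (illustrative; not the study's setup).
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def co_train(X1, X2, y, labeled, rounds=5, k=2):
          """X1, X2: two feature views of the same items; y is trusted only where `labeled` is True."""
          y, labeled = y.copy(), labeled.copy()
          for _ in range(rounds):
              for X_view in (X1, X2):
                  unlabeled = np.where(~labeled)[0]
                  if unlabeled.size == 0:
                      return y
                  clf = LogisticRegression(max_iter=1000).fit(X_view[labeled], y[labeled])
                  conf = clf.predict_proba(X_view[unlabeled]).max(axis=1)
                  top = unlabeled[np.argsort(conf)[-k:]]     # most confident unlabeled items
                  y[top] = clf.predict(X_view[top])          # pseudo-label them for the shared pool
                  labeled[top] = True
          return y

      rng = np.random.default_rng(0)
      X1 = rng.standard_normal((60, 5))                      # e.g. lexicon-based features
      X2 = rng.standard_normal((60, 5))                      # e.g. bag-of-words features
      y_true = (X1[:, 0] + X2[:, 0] > 0).astype(int)
      labeled = np.zeros(60, dtype=bool)
      labeled[np.where(y_true == 0)[0][:5]] = True           # seed a few labels per class
      labeled[np.where(y_true == 1)[0][:5]] = True
      print(co_train(X1, X2, y_true, labeled))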
  16. Xiao, G.: ¬A knowledge classification model based on the relationship between science and human needs (2013) 0.07
    0.068190396 = product of:
      0.10228559 = sum of:
        0.0610513 = weight(_text_:based in 138) [ClassicSimilarity], result of:
          0.0610513 = score(doc=138,freq=2.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.39947033 = fieldWeight in 138, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.09375 = fieldNorm(doc=138)
        0.041234285 = product of:
          0.08246857 = sum of:
            0.08246857 = weight(_text_:22 in 138) [ClassicSimilarity], result of:
              0.08246857 = score(doc=138,freq=2.0), product of:
                0.17762627 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050723847 = queryNorm
                0.46428138 = fieldWeight in 138, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=138)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    22. 2.2013 12:36:34
  17. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.07
    0.06614238 = product of:
      0.09921357 = sum of:
        0.053708613 = product of:
          0.16112584 = sum of:
            0.16112584 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.16112584 = score(doc=5820,freq=2.0), product of:
                0.43003735 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050723847 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.33333334 = coord(1/3)
        0.045504954 = weight(_text_:based in 5820) [ClassicSimilarity], result of:
          0.045504954 = score(doc=5820,freq=10.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.2977476 = fieldWeight in 5820, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
      0.6666667 = coord(2/3)
    
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word based and entity based representations together with their uncertainties considered. At last, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts, and to rank documents using their structure representations. This dissertation overcomes the limitation of word based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards the new generation of intelligent, semantic, and structured information retrieval.
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
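    Result 17 above represents queries and documents by their entity annotations and ranks them in entity space. A hedged sketch of the bag-of-entities idea (the entity IDs and the dot-product scoring are illustrative choices, not the dissertation's model):

      # Bag-of-entities ranking sketch: score = overlap of query and document entity vectors.
      from collections import Counter

      def bag_of_entities(annotations):
          """annotations: list of entity IDs produced by some entity linker (assumed given)."""
          return Counter(annotations)

      def score(query_entities, doc_entities):
          q, d = bag_of_entities(query_entities), bag_of_entities(doc_entities)
          return sum(q[e] * d[e] for e in q)          # simple dot product in entity space

      docs = {
          "d1": ["Q42", "Q5", "Q42"],                 # invented Wikidata-style entity IDs
          "d2": ["Q90", "Q142"],
      }
      query = ["Q42", "Q90"]
      print(sorted(docs, key=lambda name: score(query, docs[name]), reverse=True))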
  18. Sidhom, S.: Numerical training for the information retrieval in medical imagery : modeling of the Gabor filters (2014) 0.06
    0.06178547 = product of:
      0.18535641 = sum of:
        0.18535641 = sum of:
          0.12704232 = weight(_text_:training in 1453) [ClassicSimilarity], result of:
            0.12704232 = score(doc=1453,freq=6.0), product of:
              0.23690371 = queryWeight, product of:
                4.67046 = idf(docFreq=1125, maxDocs=44218)
                0.050723847 = queryNorm
              0.53626144 = fieldWeight in 1453, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.67046 = idf(docFreq=1125, maxDocs=44218)
                0.046875 = fieldNorm(doc=1453)
          0.05831409 = weight(_text_:22 in 1453) [ClassicSimilarity], result of:
            0.05831409 = score(doc=1453,freq=4.0), product of:
              0.17762627 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050723847 = queryNorm
              0.32829654 = fieldWeight in 1453, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1453)
      0.33333334 = coord(1/3)
    
    Abstract
    We propose, in this work, a method for medical image indexing and retrieval that exploits the digital component of the images themselves. We represent this digital component by a vector of characteristics, which we call the numerical signature of the image. Using Gabor wavelets, each image of the medical training set is indexed and represented by its texture characteristics. We thus build, offline, a database of numerical signatures. This enables us, online, to carry out a numerical similarity search against a query image. To evaluate the performance, we tested our application on a training set of mammography images. The results show that representing the digital component of the images is effective for information retrieval in imagery.
    Date
    5. 9.2014 18:22:35
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
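    Result 18 above indexes images offline by Gabor texture signatures and retrieves online by similarity to a query image. A hedged sketch using scikit-image's gabor filter (the filter-bank parameters and Euclidean distance are illustrative choices, not the paper's):

      # Gabor texture signatures: mean/std of filter responses over a small filter bank.
      import numpy as np
      from skimage.filters import gabor

      def gabor_signature(image, frequencies=(0.1, 0.3), n_orientations=4):
          feats = []
          for f in frequencies:
              for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
                  real, _ = gabor(image, frequency=f, theta=theta)
                  feats += [real.mean(), real.std()]
          return np.array(feats)

      rng = np.random.default_rng(0)
      images = {f"img{i}": rng.random((32, 32)) for i in range(3)}
      index = {name: gabor_signature(img) for name, img in images.items()}   # offline indexing

      query = rng.random((32, 32))
      q = gabor_signature(query)                                             # online query
      print(sorted(index, key=lambda name: np.linalg.norm(q - index[name])))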
  19. Beutelspacher, L.: Fördern Web 2.0 und mobile Technologien das Lernen? : Ein Bericht über die ICT 2011 in Hongkong (2011) 0.06
    0.05973299 = product of:
      0.08959948 = sum of:
        0.040700868 = weight(_text_:based in 4901) [ClassicSimilarity], result of:
          0.040700868 = score(doc=4901,freq=2.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.26631355 = fieldWeight in 4901, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0625 = fieldNorm(doc=4901)
        0.048898615 = product of:
          0.09779723 = sum of:
            0.09779723 = weight(_text_:training in 4901) [ClassicSimilarity], result of:
              0.09779723 = score(doc=4901,freq=2.0), product of:
                0.23690371 = queryWeight, product of:
                  4.67046 = idf(docFreq=1125, maxDocs=44218)
                  0.050723847 = queryNorm
                0.41281426 = fieldWeight in 4901, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.67046 = idf(docFreq=1125, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4901)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Theme
    Computer Based Training
  20. Engelbrecht, T.; Lankau, R.: Technologie in unseren Schulen schadet mehr, als sie nützt. (2017) 0.06
    0.05973299 = product of:
      0.08959948 = sum of:
        0.040700868 = weight(_text_:based in 3722) [ClassicSimilarity], result of:
          0.040700868 = score(doc=3722,freq=2.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.26631355 = fieldWeight in 3722, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0625 = fieldNorm(doc=3722)
        0.048898615 = product of:
          0.09779723 = sum of:
            0.09779723 = weight(_text_:training in 3722) [ClassicSimilarity], result of:
              0.09779723 = score(doc=3722,freq=2.0), product of:
                0.23690371 = queryWeight, product of:
                  4.67046 = idf(docFreq=1125, maxDocs=44218)
                  0.050723847 = queryNorm
                0.41281426 = fieldWeight in 3722, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.67046 = idf(docFreq=1125, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3722)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Theme
    Computer Based Training

Languages

  • e 1879
  • d 201
  • f 2
  • i 2
  • a 1
  • hu 1
  • m 1
  • pt 1
  • sp 1

Types

  • a 1899
  • el 178
  • m 105
  • s 44
  • x 22
  • r 10
  • b 5
  • i 1
  • p 1
  • z 1
