Search (500 results, page 1 of 25)

  • language_ss:"e"
  • year_i:[2010 TO 2020}
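  Note: the mixed bracket in year_i:[2010 TO 2020} is Lucene range syntax with an inclusive lower and an exclusive upper bound, i.e. publication years 2010-2019. Assuming the catalogue is served by a Solr-style backend (the field suffixes _ss/_i and the per-hit score explanations below follow Solr conventions), a minimal sketch of an equivalent filtered request could look as follows; the host, core name and query string are placeholders, since the original query is not shown on this page, and debugQuery=true is what produces the score breakdowns printed under each hit.

      import requests

      # Hypothetical request against a Solr-style backend; host, core name and the
      # query string are placeholders, not taken from this page.
      params = {
          "q": "*:*",                                          # placeholder query
          "fq": ['language_ss:"e"', "year_i:[2010 TO 2020}"],  # the two active filters
          "rows": 20,                                          # 20 hits per page (page 1 of 25)
          "start": 0,
          "debugQuery": "true",                                # return per-document score explanations
          "wt": "json",
      }
      r = requests.get("http://localhost:8983/solr/catalogue/select", params=params)
      print(r.json()["response"]["numFound"])                  # 500 for the listing above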
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.23
    0.23431563 = product of:
      0.46863127 = sum of:
        0.11715782 = product of:
          0.35147345 = sum of:
            0.35147345 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.35147345 = score(doc=1826,freq=2.0), product of:
                0.3752265 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04425879 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
        0.35147345 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.35147345 = score(doc=1826,freq=2.0), product of:
            0.3752265 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04425879 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.5 = coord(2/4)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
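    The score breakdown above (and those under the following hits) is standard Lucene ClassicSimilarity explain output: tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and coord() down-weights documents that match only part of the query. As a rough check, the displayed 0.23 for this hit can be reproduced from the listed factors alone; the sketch below assumes only these formulas and copies every constant from the breakdown itself.

      import math

      def idf(doc_freq, max_docs):
          # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      query_norm = 0.04425879                     # reported queryNorm
      field_norm = 0.078125                       # reported fieldNorm for doc 1826
      term_idf   = idf(24, 44218)                 # ~8.478011
      tf         = math.sqrt(2.0)                 # ~1.4142135 (termFreq = 2)

      query_weight = term_idf * query_norm        # ~0.3752265
      field_weight = tf * term_idf * field_norm   # ~0.93669677
      term_score   = query_weight * field_weight  # ~0.35147345 per matching term

      # The first clause is scaled by coord(1/3); the sum of both clauses by coord(2/4).
      total = (term_score * (1.0 / 3.0) + term_score) * (2.0 / 4.0)
      print(round(total, 8))                      # ~0.23431563, displayed as 0.23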
  2. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.14
    0.14058939 = product of:
      0.28117877 = sum of:
        0.07029469 = product of:
          0.21088406 = sum of:
            0.21088406 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.21088406 = score(doc=400,freq=2.0), product of:
                0.3752265 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04425879 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
        0.21088406 = weight(_text_:2f in 400) [ClassicSimilarity], result of:
          0.21088406 = score(doc=400,freq=2.0), product of:
            0.3752265 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04425879 = queryNorm
            0.56201804 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
      0.5 = coord(2/4)
    
    Content
    See: https://aclanthology.org/D19-5317.pdf.
  3. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.12
    0.122843266 = product of:
      0.24568653 = sum of:
        0.046863124 = product of:
          0.14058937 = sum of:
            0.14058937 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.14058937 = score(doc=5820,freq=2.0), product of:
                0.3752265 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04425879 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.33333334 = coord(1/3)
        0.1988234 = weight(_text_:2f in 5820) [ClassicSimilarity], result of:
          0.1988234 = score(doc=5820,freq=4.0), product of:
            0.3752265 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04425879 = queryNorm
            0.5298757 = fieldWeight in 5820, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
      0.5 = coord(2/4)
    
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. See: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  4. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.12
    0.11715782 = product of:
      0.23431563 = sum of:
        0.05857891 = product of:
          0.17573673 = sum of:
            0.17573673 = weight(_text_:3a in 4997) [ClassicSimilarity], result of:
              0.17573673 = score(doc=4997,freq=2.0), product of:
                0.3752265 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04425879 = queryNorm
                0.46834838 = fieldWeight in 4997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4997)
          0.33333334 = coord(1/3)
        0.17573673 = weight(_text_:2f in 4997) [ClassicSimilarity], result of:
          0.17573673 = score(doc=4997,freq=2.0), product of:
            0.3752265 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04425879 = queryNorm
            0.46834838 = fieldWeight in 4997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
      0.5 = coord(2/4)
    
    Content
    PhD dissertation at the International Doctorate School in Information and Communication Technology. See: https://core.ac.uk/download/pdf/150083013.pdf.
  5. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.11
    0.114436716 = product of:
      0.22887343 = sum of:
        0.21088406 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.21088406 = score(doc=563,freq=2.0), product of:
            0.3752265 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04425879 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.017989364 = product of:
          0.035978727 = sum of:
            0.035978727 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.035978727 = score(doc=563,freq=2.0), product of:
                0.15498674 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04425879 = queryNorm
                0.23214069 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Content
    A thesis presented to The University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. See: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  6. Berti, Jr., D.W.; Lima, G.; Maculan, B.; Soergel, D.: Computer-assisted checking of conceptual relationships in a large thesaurus (2018) 0.10
    0.10125081 = product of:
      0.20250162 = sum of:
        0.1785158 = weight(_text_:assisted in 4721) [ClassicSimilarity], result of:
          0.1785158 = score(doc=4721,freq=2.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.5970849 = fieldWeight in 4721, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.0625 = fieldNorm(doc=4721)
        0.02398582 = product of:
          0.04797164 = sum of:
            0.04797164 = weight(_text_:22 in 4721) [ClassicSimilarity], result of:
              0.04797164 = score(doc=4721,freq=2.0), product of:
                0.15498674 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04425879 = queryNorm
                0.30952093 = fieldWeight in 4721, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4721)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    17. 1.2019 19:04:22
  7. Devaul, H.; Diekema, A.R.; Ostwald, J.: Computer-assisted assignment of educational standards using natural language processing (2011) 0.08
    0.07593811 = product of:
      0.15187623 = sum of:
        0.13388686 = weight(_text_:assisted in 4199) [ClassicSimilarity], result of:
          0.13388686 = score(doc=4199,freq=2.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.44781366 = fieldWeight in 4199, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.046875 = fieldNorm(doc=4199)
        0.017989364 = product of:
          0.035978727 = sum of:
            0.035978727 = weight(_text_:22 in 4199) [ClassicSimilarity], result of:
              0.035978727 = score(doc=4199,freq=2.0), product of:
                0.15498674 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04425879 = queryNorm
                0.23214069 = fieldWeight in 4199, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4199)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    22. 1.2011 14:25:32
  8. Normore, L.F.: "Here be dragons" : a wayfinding approach to teaching cataloguing (2012) 0.05
    0.05055438 = product of:
      0.20221752 = sum of:
        0.20221752 = sum of:
          0.17223524 = weight(_text_:instruction in 1903) [ClassicSimilarity], result of:
            0.17223524 = score(doc=1903,freq=8.0), product of:
              0.26266864 = queryWeight, product of:
                5.934836 = idf(docFreq=317, maxDocs=44218)
                0.04425879 = queryNorm
              0.65571296 = fieldWeight in 1903, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                5.934836 = idf(docFreq=317, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1903)
          0.029982276 = weight(_text_:22 in 1903) [ClassicSimilarity], result of:
            0.029982276 = score(doc=1903,freq=2.0), product of:
              0.15498674 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04425879 = queryNorm
              0.19345059 = fieldWeight in 1903, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1903)
      0.25 = coord(1/4)
    
    Abstract
    Teaching cataloguing requires the instructor to make strategic decisions about how to approach the variety and complexity of the field and to provide an adequate theoretical foundation while preparing students for their entry into the world of practice. Accompanying these challenges are the tactical demands of providing this instruction in a distance education environment. Rather than focusing on ways to support learners in catalogue record production, instructors may use a problem solving and decision making approach to instruction. In this paper, a way to conceptualize a decision making approach that builds on a foundation provided by theories of information navigation is described. This approach, which is called "wayfinding", teaches by having students learn to find their way in the sets of rules that are commonly used. The method focuses on instruction about the structural features of rule sets, providing basic definitions of what each of the "places" in the rule sets contain (e.g., "formatting personal names" in Chapter 22 of AACR2R) and about ways to navigate those structures, enabling students to learn not only about common rules but also about less well known cataloguing practices ("dragons"). It provides both pragmatic and pedagogical benefits and helps develop links between cataloguing practices and their theoretical foundations.
    Footnote
    Contribution in a special issue "Online delivery of cataloging and classification education and instruction".
  9. Fu, T.; Abbasi, A.; Chen, H.: A focused crawler for Dark Web forums (2010) 0.05
    0.048312265 = product of:
      0.19324906 = sum of:
        0.19324906 = weight(_text_:assisted in 3471) [ClassicSimilarity], result of:
          0.19324906 = score(doc=3471,freq=6.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.64636344 = fieldWeight in 3471, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3471)
      0.25 = coord(1/4)
    
    Abstract
    The unprecedented growth of the Internet has given rise to the Dark Web, the problematic facet of the Web associated with cybercrime, hate, and extremism. Despite the need for tools to collect and analyze Dark Web forums, the covert nature of this part of the Internet makes traditional Web crawling techniques insufficient for capturing such content. In this study, we propose a novel crawling system designed to collect Dark Web forum content. The system uses a human-assisted accessibility approach to gain access to Dark Web forums. Several URL ordering features and techniques enable efficient extraction of forum postings. The system also includes an incremental crawler coupled with a recall-improvement mechanism intended to facilitate enhanced retrieval and updating of collected content. Experiments conducted to evaluate the effectiveness of the human-assisted accessibility approach and the recall-improvement-based, incremental-update procedure yielded favorable results. The human-assisted approach significantly improved access to Dark Web forums while the incremental crawler with recall improvement also outperformed standard periodic- and incremental-update approaches. Using the system, we were able to collect over 100 Dark Web forums from three regions. A case study encompassing link and content analysis of collected forums was used to illustrate the value and importance of gathering and analyzing content from such online communities.
  10. Balakrishnan, U.; Voß, J.; Soergel, D.: Towards integrated systems for KOS management, mapping, and access : Coli-conc and its collaborative computer-assisted KOS mapping tool Cocoda (2018) 0.04
    0.04462895 = product of:
      0.1785158 = sum of:
        0.1785158 = weight(_text_:assisted in 4825) [ClassicSimilarity], result of:
          0.1785158 = score(doc=4825,freq=2.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.5970849 = fieldWeight in 4825, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.0625 = fieldNorm(doc=4825)
      0.25 = coord(1/4)
    
  11. Bando, L.L.; Scholer, F.; Turpin, A.: Query-biased summary generation assisted by query expansion : temporality (2015) 0.04
    0.039446793 = product of:
      0.15778717 = sum of:
        0.15778717 = weight(_text_:assisted in 1820) [ClassicSimilarity], result of:
          0.15778717 = score(doc=1820,freq=4.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.5277535 = fieldWeight in 1820, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1820)
      0.25 = coord(1/4)
    
    Abstract
    Query-biased summaries help users to identify which items returned by a search system should be read in full. In this article, we study the generation of query-biased summaries as a sentence ranking approach, and methods to evaluate their effectiveness. Using sentence-level relevance assessments from the TREC Novelty track, we gauge the benefits of query expansion to minimize the vocabulary mismatch problem between informational requests and sentence ranking methods. Our results from an intrinsic evaluation show that query expansion significantly improves the selection of short relevant sentences (5-13 words) between 7% and 11%. However, query expansion does not lead to improvements for sentences of medium (14-20 words) and long (21-29 words) lengths. In a separate crowdsourcing study, we analyze whether a summary composed of sentences ranked using query expansion was preferred over summaries not assisted by query expansion, rather than assessing sentences individually. We found that participants chose summaries aided by query expansion around 60% of the time over summaries using an unexpanded query. We conclude that query expansion techniques can benefit the selection of sentences for the construction of query-biased summaries at the summary level rather than at the sentence ranking level.
  12. Kutz, O.; Mossakowski, T.; Galinski, C.; Lange, C.: Towards a standard for heterogeneous ontology integration and interoperability (2011) 0.04
    0.039050333 = product of:
      0.15620133 = sum of:
        0.15620133 = weight(_text_:assisted in 114) [ClassicSimilarity], result of:
          0.15620133 = score(doc=114,freq=2.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.52244925 = fieldWeight in 114, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.0546875 = fieldNorm(doc=114)
      0.25 = coord(1/4)
    
    Abstract
    Even though ontologies are widely being used to enable interoperability in information-rich endeavours, there is currently no united framework for ontology interoperability itself. Surprisingly little of the state of the art in modularity and structuring, e.g. in software engineering, has been applied to ontology engineering so far. However, application areas like Ambient Assisted Living (AAL), which require synchronization and orchestration of interoperable services, are in dire need of safe and secure ontology interoperability. OntoIOp (Ontology Integration and Interoperability), a new international standard proposed in ISO/TC 37/SC 3, aims at filling this gap.
  13. Kempf, A.O.; Ritze, D.; Eckert, K.; Zapilko, B.: New ways of mapping knowledge organization systems : using a semi-automatic matching procedure for building up vocabulary crosswalks (2013) 0.03
    0.033471715 = product of:
      0.13388686 = sum of:
        0.13388686 = weight(_text_:assisted in 989) [ClassicSimilarity], result of:
          0.13388686 = score(doc=989,freq=2.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.44781366 = fieldWeight in 989, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.046875 = fieldNorm(doc=989)
      0.25 = coord(1/4)
    
    Abstract
    Crosswalks between different vocabularies are an indispensable prerequisite for integrated and high quality search scenarios in distributed data environments. Offered through the web and linked with each other they act as a central link so that users can move back and forth between different data sources available online. In the past, crosswalks between different thesauri have primarily been developed manually. In the long run the intellectual updating of such crosswalks requires huge personnel expenses. Therefore, an integration of automatic matching procedures, as for example Ontology Matching Tools, seems an obvious need. On the basis of computer generated correspondences between the Thesaurus for Economics (STW) and the Thesaurus for the Social Sciences (TheSoz) our contribution will explore cross-border approaches between IT-assisted tools and procedures on the one hand and external quality measurements via domain experts on the other hand. The techniques that emerge enable semi-automatically performed vocabulary crosswalks.
  14. Nédellec, C.; Bossy, R.; Valsamou, D.; Ranoux, M.; Golik, W.; Sourdille, P.: Information extraction from bibliography for marker-assisted selection in wheat (2014) 0.03
    0.033471715 = product of:
      0.13388686 = sum of:
        0.13388686 = weight(_text_:assisted in 1592) [ClassicSimilarity], result of:
          0.13388686 = score(doc=1592,freq=2.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.44781366 = fieldWeight in 1592, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.046875 = fieldNorm(doc=1592)
      0.25 = coord(1/4)
    
  15. Anguiano Peña, G.; Naumis Peña, C.: Method for selecting specialized terms from a general language corpus (2015) 0.03
    0.033471715 = product of:
      0.13388686 = sum of:
        0.13388686 = weight(_text_:assisted in 2196) [ClassicSimilarity], result of:
          0.13388686 = score(doc=2196,freq=2.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.44781366 = fieldWeight in 2196, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.046875 = fieldNorm(doc=2196)
      0.25 = coord(1/4)
    
    Abstract
    Among the many aspects studied by library and information science are linguistic phenomena associated with document content analysis, for purposes of both information organization and retrieval. To this end, terms used in scientific and technical language must be recovered and their area of domain and behavior studied. Through language, society controls the knowledge available to people. Document content analysis, in this case of scientific texts, facilitates gathering knowledge of lexical units and their major applications and separating such specialized terms from the general language, to create indexing languages. The model presented here or other lexicographic resources with similar characteristics may be useful in the near future, in computer-assisted indexing or as corpora monitors, with respect to new text analyses or specialized corpora. Thus, using techniques for document content analysis of a lexicographically labeled general language corpus proposed herein, components which enable the extraction of lexical units from specialized language may be obtained and characterized.
  16. Abdi, A.; Idris, N.; Alguliev, R.M.; Aliguliyev, R.M.: Automatic summarization assessment through a combination of semantic and syntactic information for intelligent educational systems (2015) 0.03
    0.033471715 = product of:
      0.13388686 = sum of:
        0.13388686 = weight(_text_:assisted in 2681) [ClassicSimilarity], result of:
          0.13388686 = score(doc=2681,freq=2.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.44781366 = fieldWeight in 2681, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.046875 = fieldNorm(doc=2681)
      0.25 = coord(1/4)
    
    Abstract
    Summary writing is a process for creating a short version of a source text. It can be used as a measure of understanding. As grading students' summaries is a very time-consuming task, computer-assisted assessment can help teachers perform the grading more effectively. Several techniques, such as BLEU, ROUGE, N-gram co-occurrence, Latent Semantic Analysis (LSA), LSA_Ngram and LSA_ERB, have been proposed to support the automatic assessment of students' summaries. Since these techniques are more suitable for long texts, their performance is not satisfactory for the evaluation of short summaries. This paper proposes a specialized method that works well in assessing short summaries. Our proposed method integrates the semantic relations between words, and their syntactic composition. As a result, the proposed method is able to obtain high accuracy and improve the performance compared with the current techniques. Experiments have displayed that it is to be preferred over the existing techniques. A summary evaluation system based on the proposed method has also been developed.
  17. Mitchell, J.S.; Panzer, M.: Dewey linked data : Making connections with old friends and new acquaintances (2012) 0.03
    0.027893096 = product of:
      0.111572385 = sum of:
        0.111572385 = weight(_text_:assisted in 305) [ClassicSimilarity], result of:
          0.111572385 = score(doc=305,freq=2.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.37317806 = fieldWeight in 305, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.0390625 = fieldNorm(doc=305)
      0.25 = coord(1/4)
    
    Abstract
    This paper explores the history, use cases, and future plans associated with the availability of the Dewey Decimal Classification (DDC) system as linked data. Parts of the Dewey Decimal Classification (DDC) system have been available as linked data since 2009. Initial efforts included the DDC Summaries (the top three levels of the DDC) in eleven languages exposed as linked data in dewey.info. In 2010, the content of dewey.info was further extended by the addition of assignable numbers and captions from the Abridged Edition 14 data files in English, Italian, and Vietnamese. During 2012, we will add assignable numbers and captions from the latest full edition database, DDC 23. In addition to the "old friends" of different Dewey language versions, institutions such as the British Library and Deutsche Nationalbibliothek have made use of Dewey linked data in bibliographic records and authority files, and AGROVOC has linked to our data at a general level. We expect to extend our linked data network shortly to "new acquaintances" such as GeoNames, ISO 639-3 language codes, and Mathematics Subject Classification. In particular, we will examine the linking process to GeoNames as an example of cross-domain vocabulary alignment. In addition to linking plans, we report on use cases that facilitate machine-assisted categorization and support discovery in the Semantic Web environment.
  18. Kempf, A.O.; Ritze, D.; Eckert, K.; Zapilko, B.: New ways of mapping knowledge organization systems : using a semi-automatic matching procedure for building up vocabulary crosswalks (2014) 0.03
    0.027893096 = product of:
      0.111572385 = sum of:
        0.111572385 = weight(_text_:assisted in 1371) [ClassicSimilarity], result of:
          0.111572385 = score(doc=1371,freq=2.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.37317806 = fieldWeight in 1371, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1371)
      0.25 = coord(1/4)
    
    Abstract
    Crosswalks between different vocabularies are an indispensable prerequisite for integrated, high-quality search scenarios in distributed data environments where more than one controlled vocabulary is in use. Offered through the web and linked with each other they act as a central link so that users can move back and forth between different online data sources. In the past, crosswalks between different thesauri have usually been developed manually. In the long run the intellectual updating of such crosswalks is expensive. An obvious solution would be to apply automatic matching procedures, such as the so-called ontology matching tools. On the basis of computer-generated correspondences between the Thesaurus for the Social Sciences (TSS) and the Thesaurus for Economics (STW), our contribution explores the trade-off between IT-assisted tools and procedures on the one hand and external quality evaluation by domain experts on the other hand. This paper presents techniques for semi-automatic development and maintenance of vocabulary crosswalks. The performance of multiple matching tools was first evaluated against a reference set of correct mappings, then the tools were used to generate new mappings. It was concluded that the ontology matching tools can be used effectively to speed up the work of domain experts. By optimizing the workflow, the method promises to facilitate sustained updating of high-quality vocabulary crosswalks.
  19. Li, Y.; Xu, S.; Luo, X.; Lin, S.: A new algorithm for product image search based on salient edge characterization (2014) 0.03
    0.027893096 = product of:
      0.111572385 = sum of:
        0.111572385 = weight(_text_:assisted in 1552) [ClassicSimilarity], result of:
          0.111572385 = score(doc=1552,freq=2.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.37317806 = fieldWeight in 1552, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1552)
      0.25 = coord(1/4)
    
    Abstract
    Visually assisted product image search has gained increasing popularity because of its capability to greatly improve end users' e-commerce shopping experiences. Different from general-purpose content-based image retrieval (CBIR) applications, the specific goal of product image search is to retrieve and rank relevant products from a large-scale product database to visually assist a user's online shopping experience. In this paper, we explore the problem of product image search through salient edge characterization and analysis, for which we propose a novel image search method coupled with an interactive user region-of-interest indication function. Given a product image, the proposed approach first extracts an edge map, based on which contour curves are further extracted. We then segment the extracted contours into fragments according to the detected contour corners. After that, a set of salient edge elements is extracted from each product image. Based on salient edge elements matching and similarity evaluation, the method derives a new pairwise image similarity estimate. Using the new image similarity, we can then retrieve product images. To evaluate the performance of our algorithm, we conducted 120 sessions of querying experiments on a data set comprised of around 13k product images collected from multiple, real-world e-commerce websites. We compared the performance of the proposed method with that of a bag-of-words method (Philbin, Chum, Isard, Sivic, & Zisserman, 2008) and a Pyramid Histogram of Orientated Gradients (PHOG) method (Bosch, Zisserman, & Munoz, 2007). Experimental results demonstrate that the proposed method improves the performance of example-based product image retrieval.
  20. Golub, K.; Soergel, D.; Buchanan, G.; Tudhope, D.; Lykke, M.; Hiom, D.: A framework for evaluating automatic indexing or classification in the context of retrieval (2016) 0.03
    0.027893096 = product of:
      0.111572385 = sum of:
        0.111572385 = weight(_text_:assisted in 3311) [ClassicSimilarity], result of:
          0.111572385 = score(doc=3311,freq=2.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.37317806 = fieldWeight in 3311, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3311)
      0.25 = coord(1/4)
    
    Abstract
    Tools for automatic subject assignment help deal with scale and sustainability in creating and enriching metadata, establishing more connections across and between resources and enhancing consistency. Although some software vendors and experimental researchers claim the tools can replace manual subject indexing, hard scientific evidence of their performance in operating information environments is scarce. A major reason for this is that research is usually conducted in laboratory conditions, excluding the complexities of real-life systems and situations. The article reviews and discusses issues with existing evaluation approaches such as problems of aboutness and relevance assessments, implying the need to use more than a single "gold standard" method when evaluating indexing and retrieval, and proposes a comprehensive evaluation framework. The framework is informed by a systematic review of the literature on evaluation approaches: evaluating indexing quality directly through assessment by an evaluator or through comparison with a gold standard, evaluating the quality of computer-assisted indexing directly in the context of an indexing workflow, and evaluating indexing quality indirectly through analyzing retrieval performance.

Types

  • a 459
  • m 27
  • el 18
  • s 12
  • x 5
  • b 4
  • i 1
  • r 1
