Search (4988 results, page 1 of 250)

  • Active filter: year_i:[2000 TO 2010}
  1. White, R.W.; Marchionini, G.; Muresan, G.: Evaluating exploratory search systems : introduction to special topic issue of information processing and management (2008) 0.08
    0.07858936 = product of:
      0.15717871 = sum of:
        0.13343097 = weight(_text_:processing in 5025) [ClassicSimilarity], result of:
          0.13343097 = score(doc=5025,freq=4.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.7590276 = fieldWeight in 5025, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.09375 = fieldNorm(doc=5025)
        0.02374774 = product of:
          0.07124322 = sum of:
            0.07124322 = weight(_text_:29 in 5025) [ClassicSimilarity], result of:
              0.07124322 = score(doc=5025,freq=2.0), product of:
                0.15275662 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043425296 = queryNorm
                0.46638384 = fieldWeight in 5025, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5025)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Date
    29. 7.2008 12:28:57
    Source
    Information processing and management. 44(2008) no.2, S.433-436
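The explain tree above is standard Lucene ClassicSimilarity arithmetic (tf·idf with query/field norms and coordination factors). A minimal sketch that reproduces the numbers for result 1 (doc 5025), assuming Lucene's documented formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq+1)):

```python
import math

# Lucene ClassicSimilarity building blocks, used to verify the
# explain tree of result 1 (doc 5025) term by term.
def tf(freq):                 # term-frequency factor
    return math.sqrt(freq)

def idf(doc_freq, max_docs):  # inverse document frequency
    return 1.0 + math.log(max_docs / (doc_freq + 1))

query_norm = 0.043425296      # taken directly from the explain output
field_norm = 0.09375          # stored length norm for this field

# clause "processing", freq=4 in doc 5025
idf_p = idf(2097, 44218)                        # ~ 4.048147
query_weight_p = idf_p * query_norm             # ~ 0.175792
field_weight_p = tf(4.0) * idf_p * field_norm   # ~ 0.7590276
w_processing = query_weight_p * field_weight_p  # ~ 0.13343097

# clause "29", freq=2, wrapped in coord(1/3)
idf_29 = idf(3565, 44218)                       # ~ 3.5176873
w_29 = (idf_29 * query_norm) * (tf(2.0) * idf_29 * field_norm) * (1 / 3)

# final score: sum of clause weights times coord(2/4)
score = (w_processing + w_29) * (2 / 4)         # ~ 0.07858936
```

The same arithmetic, with the per-document freq and fieldNorm swapped in, reproduces every other explain tree in this listing.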
  2. Moens, M.-F.; Dumortier, J.: Text categorization : the assignment of subject descriptors to magazine articles (2000) 0.07
    0.06889032 = product of:
      0.13778064 = sum of:
        0.11007494 = weight(_text_:processing in 3329) [ClassicSimilarity], result of:
          0.11007494 = score(doc=3329,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.6261658 = fieldWeight in 3329, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.109375 = fieldNorm(doc=3329)
        0.027705695 = product of:
          0.08311708 = sum of:
            0.08311708 = weight(_text_:29 in 3329) [ClassicSimilarity], result of:
              0.08311708 = score(doc=3329,freq=2.0), product of:
                0.15275662 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043425296 = queryNorm
                0.5441145 = fieldWeight in 3329, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3329)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Date
    27.12.2007 19:29:10
    Source
    Information processing and management. 36(2000) no.6, S.841-861
  3. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.07
    0.0687657 = product of:
      0.1375314 = sum of:
        0.11007494 = weight(_text_:processing in 6438) [ClassicSimilarity], result of:
          0.11007494 = score(doc=6438,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.6261658 = fieldWeight in 6438, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.109375 = fieldNorm(doc=6438)
        0.027456466 = product of:
          0.082369395 = sum of:
            0.082369395 = weight(_text_:22 in 6438) [ClassicSimilarity], result of:
              0.082369395 = score(doc=6438,freq=2.0), product of:
                0.15206799 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043425296 = queryNorm
                0.5416616 = fieldWeight in 6438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6438)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Date
    11. 8.2001 16:22:19
    Source
    Information processing and management. 36(2000) no.1, S.3-36
  4. Saeed, K.; Dardzinska, A.: Natural language processing : word recognition without segmentation (2001) 0.06
    0.060537968 = product of:
      0.121075936 = sum of:
        0.07783473 = weight(_text_:processing in 7707) [ClassicSimilarity], result of:
          0.07783473 = score(doc=7707,freq=4.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.4427661 = fieldWeight in 7707, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7707)
        0.043241203 = product of:
          0.064861804 = sum of:
            0.023303263 = weight(_text_:science in 7707) [ClassicSimilarity], result of:
              0.023303263 = score(doc=7707,freq=2.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.20372227 = fieldWeight in 7707, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7707)
            0.04155854 = weight(_text_:29 in 7707) [ClassicSimilarity], result of:
              0.04155854 = score(doc=7707,freq=2.0), product of:
                0.15275662 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043425296 = queryNorm
                0.27205724 = fieldWeight in 7707, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7707)
          0.6666667 = coord(2/3)
      0.5 = coord(2/4)
    
    Abstract
    In an earlier article about the methods of recognition of machine and hand-written cursive letters, we presented a model showing the possibility of processing, classifying, and hence recognizing such scripts as images. The practical results we obtained encouraged us to extend the theory to an algorithm for word recognition. In this article, we introduce our ideas, describe our achievements, and present our results of testing words for recognition without segmentation. This would lead to the possibility of applying the methods used in this work, together with other previously developed algorithms, to process whole sentences and, hence, written and spoken texts with the goal of automatic recognition.
    Date
    16.12.2001 18:29:38
    Source
    Journal of the American Society for Information Science and Technology. 52(2001) no.14, S.1275-1279
  5. Greenberg, J.: Optimal query expansion (QE) processing methods with semantically encoded structured thesaurus terminology (2001) 0.06
    0.05938667 = product of:
      0.11877334 = sum of:
        0.08170945 = weight(_text_:processing in 5750) [ClassicSimilarity], result of:
          0.08170945 = score(doc=5750,freq=6.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.4648076 = fieldWeight in 5750, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.046875 = fieldNorm(doc=5750)
        0.03706389 = product of:
          0.055595834 = sum of:
            0.019974224 = weight(_text_:science in 5750) [ClassicSimilarity], result of:
              0.019974224 = score(doc=5750,freq=2.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.17461908 = fieldWeight in 5750, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5750)
            0.03562161 = weight(_text_:29 in 5750) [ClassicSimilarity], result of:
              0.03562161 = score(doc=5750,freq=2.0), product of:
                0.15275662 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043425296 = queryNorm
                0.23319192 = fieldWeight in 5750, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5750)
          0.6666667 = coord(2/3)
      0.5 = coord(2/4)
    
    Abstract
    While researchers have explored the value of structured thesauri as controlled vocabularies for general information retrieval (IR) activities, they have not identified the optimal query expansion (QE) processing methods for taking advantage of the semantic encoding underlying the terminology in these tools. The study reported on in this article addresses this question, examining whether QE via semantically encoded thesaurus terminology is more effective in the automatic or the interactive processing environment. The research found that, regardless of end-users' retrieval goals, synonyms and partial synonyms (SYNs) and narrower terms (NTs) are generally good candidates for automatic QE, and that related terms (RTs) are better candidates for interactive QE. The study also examined end-users' selection of semantically encoded thesaurus terms for interactive QE, and explored how retrieval goals and QE processes may be combined in future thesaurus-supported IR systems.
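The automatic-versus-interactive split described in this abstract can be illustrated with a toy sketch. The thesaurus contents and relation labels below are invented for illustration and are not taken from the study:

```python
# Illustrative thesaurus-based query expansion (QE), following the
# abstract's finding: synonyms (SYN) and narrower terms (NT) are
# expanded automatically, related terms (RT) are only *offered* to
# the end-user for interactive QE. The thesaurus is a made-up example.
THESAURUS = {
    "indexing": {"SYN": ["subject indexing"],
                 "NT":  ["automatic indexing"],
                 "RT":  ["classification"]},
}

def expand_query(terms, thesaurus):
    expanded, suggestions = list(terms), []
    for t in terms:
        rels = thesaurus.get(t, {})
        expanded += rels.get("SYN", []) + rels.get("NT", [])  # automatic QE
        suggestions += rels.get("RT", [])  # candidates for interactive QE
    return expanded, suggestions

expanded, suggestions = expand_query(["indexing"], THESAURUS)
# expanded    -> ['indexing', 'subject indexing', 'automatic indexing']
# suggestions -> ['classification']
```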
    Date
    29. 9.2001 14:00:11
    Source
    Journal of the American Society for Information Science and Technology. 52(2001) no.6, S.487-498
  6. Sparck Jones, K.; Walker, S.; Robertson, S.E.: A probabilistic model of information retrieval : development and comparative experiments - part 1 (2000) 0.06
    0.059048843 = product of:
      0.118097685 = sum of:
        0.09434994 = weight(_text_:processing in 4181) [ClassicSimilarity], result of:
          0.09434994 = score(doc=4181,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.53671354 = fieldWeight in 4181, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.09375 = fieldNorm(doc=4181)
        0.02374774 = product of:
          0.07124322 = sum of:
            0.07124322 = weight(_text_:29 in 4181) [ClassicSimilarity], result of:
              0.07124322 = score(doc=4181,freq=2.0), product of:
                0.15275662 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043425296 = queryNorm
                0.46638384 = fieldWeight in 4181, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4181)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Date
    27.12.2007 19:27:29
    Source
    Information processing and management. 36(2000) no.6, S.779-808
  7. Gow, J.; Blandford, A.; Cunningham, S.J.: Special issue on digital libraries in the context of users' broader activities (2008) 0.06
    0.059048843 = product of:
      0.118097685 = sum of:
        0.09434994 = weight(_text_:processing in 6060) [ClassicSimilarity], result of:
          0.09434994 = score(doc=6060,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.53671354 = fieldWeight in 6060, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.09375 = fieldNorm(doc=6060)
        0.02374774 = product of:
          0.07124322 = sum of:
            0.07124322 = weight(_text_:29 in 6060) [ClassicSimilarity], result of:
              0.07124322 = score(doc=6060,freq=2.0), product of:
                0.15275662 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043425296 = queryNorm
                0.46638384 = fieldWeight in 6060, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6060)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Date
    29. 7.2008 18:41:20
    Source
    Information processing and management. 44(2008) no.2, S.556-557
  8. Okada, M.; Ando, K.; Lee, S.S.; Hayashi, Y.; Aoe, J.I.: An efficient substring search method by using delayed keyword extraction (2001) 0.06
    0.059048843 = product of:
      0.118097685 = sum of:
        0.09434994 = weight(_text_:processing in 6415) [ClassicSimilarity], result of:
          0.09434994 = score(doc=6415,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.53671354 = fieldWeight in 6415, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.09375 = fieldNorm(doc=6415)
        0.02374774 = product of:
          0.07124322 = sum of:
            0.07124322 = weight(_text_:29 in 6415) [ClassicSimilarity], result of:
              0.07124322 = score(doc=6415,freq=2.0), product of:
                0.15275662 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043425296 = queryNorm
                0.46638384 = fieldWeight in 6415, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6415)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Date
    29. 3.2002 17:24:03
    Source
    Information processing and management. 37(2001) no.5, S.741-761
  9. Robertson, S.; Tait, J.: In Memoriam Karen Sparck Jones (2007) 0.06
    0.058942027 = product of:
      0.117884055 = sum of:
        0.09434994 = weight(_text_:processing in 2927) [ClassicSimilarity], result of:
          0.09434994 = score(doc=2927,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.53671354 = fieldWeight in 2927, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.09375 = fieldNorm(doc=2927)
        0.023534112 = product of:
          0.070602335 = sum of:
            0.070602335 = weight(_text_:22 in 2927) [ClassicSimilarity], result of:
              0.070602335 = score(doc=2927,freq=2.0), product of:
                0.15206799 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043425296 = queryNorm
                0.46428138 = fieldWeight in 2927, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2927)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Date
    26.12.2007 14:22:47
    Source
    Information processing and management. 43(2007) no.6, S.1441-1446
  10. Farooq, U.; Ganoe, C.H.; Carroll, J.M.; Councill, I.G.; Giles, C.L.: Design and evaluation of awareness mechanisms in CiteSeer (2008) 0.06
    0.057529993 = product of:
      0.11505999 = sum of:
        0.03931248 = weight(_text_:processing in 2051) [ClassicSimilarity], result of:
          0.03931248 = score(doc=2051,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.22363065 = fieldWeight in 2051, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2051)
        0.075747505 = sum of:
          0.016645188 = weight(_text_:science in 2051) [ClassicSimilarity], result of:
            0.016645188 = score(doc=2051,freq=2.0), product of:
              0.11438741 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.043425296 = queryNorm
              0.1455159 = fieldWeight in 2051, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2051)
          0.029684676 = weight(_text_:29 in 2051) [ClassicSimilarity], result of:
            0.029684676 = score(doc=2051,freq=2.0), product of:
              0.15275662 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.043425296 = queryNorm
              0.19432661 = fieldWeight in 2051, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2051)
          0.029417641 = weight(_text_:22 in 2051) [ClassicSimilarity], result of:
            0.029417641 = score(doc=2051,freq=2.0), product of:
              0.15206799 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043425296 = queryNorm
              0.19345059 = fieldWeight in 2051, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2051)
      0.5 = coord(2/4)
    
    Abstract
    Awareness has been extensively studied in human computer interaction (HCI) and computer supported cooperative work (CSCW). The success of many collaborative systems hinges on effectively supporting awareness of different collaborators, their actions, and the process of creating shared work products. As digital libraries are increasingly becoming more than just repositories for information search and retrieval - essentially fostering collaboration among their communities of users - awareness remains an unexplored research area in this domain. We are investigating awareness mechanisms in CiteSeer, a scholarly digital library for the computer and information science domain. CiteSeer users can be notified of new publication events (e.g., publication of a paper that cites one of their papers) using feeds as notification systems. We present three cumulative user studies - requirements elicitation, prototype evaluation, and naturalistic study - in the context of supporting CiteSeer feeds. Our results indicate that users prefer feeds that place target items in query-relevant contexts, and that preferred context varies with type of publication event. We found that users integrated feeds as part of their broader, everyday activities and used them as planning tools to collaborate with others.
    Date
    29. 7.2008 19:27:22
    Source
    Information processing and management. 44(2008) no.2, S.596-612
  11. Miller, U.; Teitelbaum, R.: Pre-coordination and post-coordination : past and future (2002) 0.05
    0.054590274 = product of:
      0.10918055 = sum of:
        0.0953277 = weight(_text_:processing in 1395) [ClassicSimilarity], result of:
          0.0953277 = score(doc=1395,freq=6.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.54227555 = fieldWeight in 1395, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1395)
        0.013852848 = product of:
          0.04155854 = sum of:
            0.04155854 = weight(_text_:29 in 1395) [ClassicSimilarity], result of:
              0.04155854 = score(doc=1395,freq=2.0), product of:
                0.15275662 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043425296 = queryNorm
                0.27205724 = fieldWeight in 1395, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1395)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    This article deals with the meaningful processing of information in relation to two systems of information processing: pre-coordination and post-coordination. The different approaches are discussed, with emphasis on the need for a controlled vocabulary in information retrieval. Assigned indexing, which employs a controlled vocabulary, is described in detail. Types of indexing language can be divided into two broad groups - those using pre-coordinated terms and those depending on post-coordination. They represent two different basic approaches to processing and information retrieval. The historical development of these two approaches is described, as well as the two tools that apply to these approaches: thesauri and subject headings.
    Source
    Knowledge organization. 29(2002) no.2, S.87-93
  12. Computational linguistics and intelligent text processing : second international conference; Proceedings. CICLing 2001, Mexico City, Mexico, 18.-24.2.2001 (2001) 0.05
    0.053833045 = product of:
      0.10766609 = sum of:
        0.09434994 = weight(_text_:processing in 3177) [ClassicSimilarity], result of:
          0.09434994 = score(doc=3177,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.53671354 = fieldWeight in 3177, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.09375 = fieldNorm(doc=3177)
        0.01331615 = product of:
          0.03994845 = sum of:
            0.03994845 = weight(_text_:science in 3177) [ClassicSimilarity], result of:
              0.03994845 = score(doc=3177,freq=2.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.34923816 = fieldWeight in 3177, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3177)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Series
    Lecture notes in computer science; vol.2004
  13. Innovations and advanced techniques in systems, computing sciences and software engineering (2008) 0.05
    0.052952807 = product of:
      0.105905615 = sum of:
        0.09629551 = weight(_text_:processing in 4319) [ClassicSimilarity], result of:
          0.09629551 = score(doc=4319,freq=12.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.547781 = fieldWeight in 4319, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4319)
        0.009610103 = product of:
          0.02883031 = sum of:
            0.02883031 = weight(_text_:science in 4319) [ClassicSimilarity], result of:
              0.02883031 = score(doc=4319,freq=6.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.25204095 = fieldWeight in 4319, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4319)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    Innovations and Advanced Techniques in Systems, Computing Sciences and Software Engineering includes a set of rigorously reviewed world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of Computer Science, Software Engineering, Computer Engineering, and Systems Engineering and Sciences. The volume includes selected papers from the conference proceedings of the International Conference on Systems, Computing Sciences and Software Engineering (SCSS 2007), which was part of the International Joint Conferences on Computer, Information and Systems Sciences and Engineering (CISSE 2007).
    Content
    Contents: Image and Pattern Recognition: Compression, Image Processing, Signal Processing Architectures, Signal Processing for Communication, Signal Processing Implementation, Speech Compression, and Video Coding Architectures.
    Languages and Systems: Algorithms, Databases, Embedded Systems and Applications, File Systems and I/O, Geographical Information Systems, Kernel and OS Structures, Knowledge Based Systems, Modeling and Simulation, Object Based Software Engineering, Programming Languages, and Programming Models and Tools.
    Parallel Processing: Distributed Scheduling, Multiprocessing, Real-time Systems, Simulation Modeling and Development, and Web Applications.
    New Trends in Computing: Computers for People of Special Needs, Fuzzy Inference, Human Computer Interaction, Incremental Learning, Internet-based Computing Models, Machine Intelligence, Natural Language Processing, Neural Networks, and Online Decision Support Systems.
    LCSH
    Computer Science
    Subject
    Computer Science
  14. Miranda-Arguedas, A.: Standardization of technical processes in Central American libraries (2002) 0.05
    0.052392907 = product of:
      0.104785815 = sum of:
        0.08895399 = weight(_text_:processing in 5480) [ClassicSimilarity], result of:
          0.08895399 = score(doc=5480,freq=4.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.5060184 = fieldWeight in 5480, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0625 = fieldNorm(doc=5480)
        0.015831826 = product of:
          0.047495477 = sum of:
            0.047495477 = weight(_text_:29 in 5480) [ClassicSimilarity], result of:
              0.047495477 = score(doc=5480,freq=2.0), product of:
                0.15275662 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043425296 = queryNorm
                0.31092256 = fieldWeight in 5480, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5480)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    This article discusses the standardization of technical processes in Central American libraries. Topics covered include tools used for document analysis, tools used to process documents, standards for document cataloging, development of collections on and by indigenous ethnic groups, means of access to collections of documents on and by indigenous ethnic groups, standards used for information processing, development of document databases, access to networks, need for training on information processing, and consortiums of Central American libraries.
    Date
    29. 7.2006 19:40:49
  15. MacFarlane, A.; Robertson, S.E.; McCann, J.A.: Parallel computing for passage retrieval (2004) 0.05
    0.0523217 = product of:
      0.1046434 = sum of:
        0.08895399 = weight(_text_:processing in 5108) [ClassicSimilarity], result of:
          0.08895399 = score(doc=5108,freq=4.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.5060184 = fieldWeight in 5108, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0625 = fieldNorm(doc=5108)
        0.01568941 = product of:
          0.047068227 = sum of:
            0.047068227 = weight(_text_:22 in 5108) [ClassicSimilarity], result of:
              0.047068227 = score(doc=5108,freq=2.0), product of:
                0.15206799 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043425296 = queryNorm
                0.30952093 = fieldWeight in 5108, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5108)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    In this paper methods for both speeding up passage processing and examining more passages using parallel computers are explored. The number of passages processed is varied in order to examine the effect on retrieval effectiveness and efficiency. The particular algorithm applied has previously been used to good effect in Okapi experiments at TREC. This algorithm and the mechanism for applying parallel computing to speed up processing are described.
    Date
    20. 1.2007 18:30:22
  16. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.05
    0.05156062 = product of:
      0.10312124 = sum of:
        0.05559624 = weight(_text_:processing in 2541) [ClassicSimilarity], result of:
          0.05559624 = score(doc=2541,freq=4.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.3162615 = fieldWeight in 2541, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2541)
        0.047525004 = product of:
          0.071287505 = sum of:
            0.029684676 = weight(_text_:29 in 2541) [ClassicSimilarity], result of:
              0.029684676 = score(doc=2541,freq=2.0), product of:
                0.15275662 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043425296 = queryNorm
                0.19432661 = fieldWeight in 2541, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2541)
            0.041602828 = weight(_text_:22 in 2541) [ClassicSimilarity], result of:
              0.041602828 = score(doc=2541,freq=4.0), product of:
                0.15206799 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043425296 = queryNorm
                0.27358043 = fieldWeight in 2541, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2541)
          0.6666667 = coord(2/3)
      0.5 = coord(2/4)
    
    Abstract
    The Specialized Information Services Division (SIS) of the National Library of Medicine (NLM) provides Web access to more than a dozen scientific databases on toxicology and the environment on TOXNET. Search queries on TOXNET often include misspelled or variant English words, medical and scientific jargon and chemical names. Following the example of search engines like Google and ClinicalTrials.gov, we set out to develop a spelling "suggestion" system for increased recall and precision in TOXNET searching. This paper describes development of dictionary technology that can be used in a variety of applications such as orthographic verification, writing aid, natural language processing, and information storage and retrieval. The design of the technology allows building complex applications using the components developed in the earlier phases of the work in a modular fashion without extensive rewriting of computer code. Since many of the potential applications envisioned for this work have on-line or web-based interfaces, the dictionaries and other computer components must have fast response, and must be adaptable to open-ended database vocabularies, including chemical nomenclature. The dictionary vocabulary for this work was derived from SIS and other databases and specialized resources, such as NLM's Unified Medical Language System (UMLS). The resulting technology, A-Z Dictionary (AZdict), has three major constituents: 1) the vocabulary list, 2) the word attributes that define part of speech and morphological relationships between words in the list, and 3) a set of programs that implements the retrieval of words and their attributes, and determines similarity between words (ChemSpell). These three components can be used in various applications such as spelling verification, spelling aid, part-of-speech tagging, paraphrasing, and many other natural language processing functions.
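The word-similarity component (ChemSpell) is not published as code, but the spelling-suggestion idea can be sketched with a standard similarity measure. Here `difflib` stands in for the paper's own metric, and the five-word vocabulary is invented for illustration:

```python
import difflib

def suggest(word, vocabulary, n=3, cutoff=0.7):
    """Return up to n vocabulary words most similar to the query word.
    difflib's sequence-matching ratio is an illustrative stand-in for
    the word-similarity measure implemented in ChemSpell."""
    return difflib.get_close_matches(word.lower(), vocabulary, n=n, cutoff=cutoff)

vocab = ["toxicology", "benzene", "toluene", "arsenic", "formaldehyde"]
print(suggest("toxicolgy", vocab))  # -> ['toxicology']
```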
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
  17. Ofoghi, B.; Yearwood, J.; Ma, L.: ¬The impact of frame semantic annotation levels, frame-alignment techniques, and fusion methods on factoid answer processing (2009) 0.05
    0.050921954 = product of:
      0.10184391 = sum of:
        0.09629551 = weight(_text_:processing in 88) [ClassicSimilarity], result of:
          0.09629551 = score(doc=88,freq=12.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.547781 = fieldWeight in 88, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=88)
        0.005548396 = product of:
          0.016645188 = sum of:
            0.016645188 = weight(_text_:science in 88) [ClassicSimilarity], result of:
              0.016645188 = score(doc=88,freq=2.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.1455159 = fieldWeight in 88, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=88)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    The impact of frame semantic enrichment of texts on the task of factoid question answering (QA) is studied in this paper. In particular, we consider different techniques for answer processing with frame semantics: the level of semantic class identification and role assignment to texts, and the fusion of frame semantic-based answer-processing approaches with other methods used in the Text REtrieval Conference (TREC). The impact of each of these aspects on the overall performance of a QA system is analyzed. The TREC 2004 and TREC 2006 factoid question sets were used for the experiments. These demonstrate that the exploitation of encapsulated frame semantics in FrameNet in a shallow semantic parsing process can enhance answer-processing performance in factoid QA systems. This improvement is dependent on the level of semantic annotation, the frame semantic alignment method, and the method of fusing frame semantic-based answer-processing models with other existing models. A more comprehensively annotated environment with all different part-of-speech target predicates provides a higher chance of correct factoid answer retrieval where semantic alignment is based on both semantic classes and a relaxed set of semantic roles for answer span identification. Our experiments on fusion techniques of frame semantic-based and entity-based answer-processing models show that merging answer lists with respect to their scores and redundancy by exploiting a fusion function leads to a more effective overall factoid QA system compared to the use of individual models.
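The fusion step, merging answer lists "with respect to their scores and redundancy", can be illustrated with a CombSUM-style function; the paper's actual fusion function is not reproduced here, and the two toy answer lists are invented:

```python
from collections import defaultdict

def fuse_answer_lists(lists):
    """Merge (answer, score) lists by summing each answer's scores across
    lists, so answers that are both high-scoring and redundant (returned
    by several models) rise to the top. A CombSUM-style stand-in for the
    fusion function described in the abstract."""
    fused = defaultdict(float)
    for answers in lists:
        for answer, score in answers:
            fused[answer] += score
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

frame_model = [("1969", 0.8), ("1970", 0.3)]   # invented frame-semantic answers
entity_model = [("1969", 0.6), ("1968", 0.5)]  # invented entity-based answers
merged = fuse_answer_lists([frame_model, entity_model])
print(merged[0][0])  # the redundant, high-scoring answer '1969' wins
```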
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.2, S.247-263
  18. Liu, L.-J.; Shen, X.-B.; Zou, X.-C.: ¬An improved fast encoding algorithm for vector quantization (2004) 0.05
    0.04948889 = product of:
      0.09897778 = sum of:
        0.068091206 = weight(_text_:processing in 2067) [ClassicSimilarity], result of:
          0.068091206 = score(doc=2067,freq=6.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.38733965 = fieldWeight in 2067, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2067)
        0.030886576 = product of:
          0.046329863 = sum of:
            0.016645188 = weight(_text_:science in 2067) [ClassicSimilarity], result of:
              0.016645188 = score(doc=2067,freq=2.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.1455159 = fieldWeight in 2067, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2067)
            0.029684676 = weight(_text_:29 in 2067) [ClassicSimilarity], result of:
              0.029684676 = score(doc=2067,freq=2.0), product of:
                0.15275662 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043425296 = queryNorm
                0.19432661 = fieldWeight in 2067, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2067)
          0.6666667 = coord(2/3)
      0.5 = coord(2/4)
    
    Abstract
    With the popularization of the Internet and the development of communication technology, more and more information has to be processed at high speed. Data compression is one of the key techniques in information processing and image distribution: its objective is to reduce the data rate for transmission and storage. Vector quantization (VQ) is a very powerful method for data compression. One of the key problems of the basic VQ method, the full search algorithm, is that it is computationally intensive and difficult to apply in real-time processing. Many fast encoding algorithms have been developed for this reason. In this paper, we present a half-L2-norm pyramid data structure and a new method of searching and processing codewords that significantly speeds up the search, especially for high-dimensional vectors and large codebooks; reduces the actual memory requirement, which is preferred in hardware implementations, e.g., SOC (system-on-chip); and produces the same encoded image quality as the full search algorithm. Simulation results show that the proposed method outperforms some existing related fast encoding algorithms.
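The full-search baseline that the proposed fast algorithm targets is straightforward to sketch: for every input vector, compare against every codeword and keep the one with minimum squared distortion. The half-L2-norm pyramid pruning itself is not reproduced here, and the tiny codebook is illustrative:

```python
def full_search_encode(vectors, codebook):
    """Baseline full-search VQ encoding: return, for each input vector,
    the index of the codeword with minimum squared Euclidean distortion.
    This exhaustive O(N * K * d) loop is what fast encoding algorithms
    like the paper's aim to prune."""
    def sqdist(x, c):
        return sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return [min(range(len(codebook)), key=lambda j: sqdist(v, codebook[j]))
            for v in vectors]

codebook = [[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]]
data = [[0.9, 1.1], [0.1, -0.2]]
print(full_search_encode(data, codebook))  # -> [1, 0]
```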
    Date
    9. 1.2004 14:28:29
    Source
    Journal of the American Society for Information Science and technology. 55(2004) no.1, S.81-87
  19. Fernández-Molina, J.C.; Peis, E.: ¬The moral rights of authors in the age of digital information (2001) 0.05
    0.049139336 = product of:
      0.09827867 = sum of:
        0.05503747 = weight(_text_:processing in 5582) [ClassicSimilarity], result of:
          0.05503747 = score(doc=5582,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.3130829 = fieldWeight in 5582, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5582)
        0.043241203 = product of:
          0.064861804 = sum of:
            0.023303263 = weight(_text_:science in 5582) [ClassicSimilarity], result of:
              0.023303263 = score(doc=5582,freq=2.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.20372227 = fieldWeight in 5582, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5582)
            0.04155854 = weight(_text_:29 in 5582) [ClassicSimilarity], result of:
              0.04155854 = score(doc=5582,freq=2.0), product of:
                0.15275662 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043425296 = queryNorm
                0.27205724 = fieldWeight in 5582, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5582)
          0.6666667 = coord(2/3)
      0.5 = coord(2/4)
    
    Abstract
    In addition to stipulating economic rights, the copyright laws of most nations grant authors a series of "moral rights." The development of digital information and the new possibilities for information processing and transmission have given added significance to moral rights. This article briefly explains the content and characteristics of moral rights, and assesses the most important aspects of legislation in this area. The basic problems of the digital environment with respect to moral rights are discussed, and some suggestions are made for the international harmonization of rules controlling these rights
    Date
    29. 9.2001 13:58:46
    Source
    Journal of the American Society for Information Science and technology. 52(2001) no.2, S.109-117
  20. Chen, M.; Liu, X.; Qin, J.: Semantic relation extraction from socially-generated tags : a methodology for metadata generation (2008) 0.05
    0.047498893 = product of:
      0.094997786 = sum of:
        0.05559624 = weight(_text_:processing in 2648) [ClassicSimilarity], result of:
          0.05559624 = score(doc=2648,freq=4.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.3162615 = fieldWeight in 2648, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2648)
        0.039401546 = product of:
          0.05910232 = sum of:
            0.029684676 = weight(_text_:29 in 2648) [ClassicSimilarity], result of:
              0.029684676 = score(doc=2648,freq=2.0), product of:
                0.15275662 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043425296 = queryNorm
                0.19432661 = fieldWeight in 2648, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2648)
            0.029417641 = weight(_text_:22 in 2648) [ClassicSimilarity], result of:
              0.029417641 = score(doc=2648,freq=2.0), product of:
                0.15206799 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043425296 = queryNorm
                0.19345059 = fieldWeight in 2648, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2648)
          0.6666667 = coord(2/3)
      0.5 = coord(2/4)
    
    Abstract
    The growing predominance of social semantics in the form of tagging presents the metadata community with both opportunities and challenges for leveraging this new form of content representation in retrieval. One key challenge is the absence of contextual information associated with these tags. This paper presents an experiment working with Flickr tags as an example of utilizing social semantic sources for enriching subject metadata. The procedure included four steps: 1) collecting a sample of Flickr tags, 2) calculating co-occurrences between tags through mutual information, 3) tracing contextual information of tag pairs via Google search results, and 4) applying natural language processing and machine learning techniques to extract semantic relations between tags. The experiment helped us build a context sentence collection from the Google search results, which was then processed by natural language processing and machine learning algorithms. This new approach achieved a reasonably good rate of accuracy in assigning semantic relations to tag pairs. The paper also explores the implications of this approach for using social semantics to enrich subject metadata.
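Step 2, measuring tag co-occurrence via mutual information, can be sketched as pointwise mutual information over tag pairs. The exact estimator used in the paper is not specified, and the toy tag sets below are invented:

```python
import math
from collections import Counter
from itertools import combinations

def tag_pmi(tagged_items):
    """Pointwise mutual information for tag pairs:
    PMI(a, b) = log2(P(a, b) / (P(a) * P(b))), with probabilities estimated
    from how often each tag (and each tag pair) occurs across the items.
    An illustrative stand-in for the co-occurrence measure in step 2."""
    n = len(tagged_items)
    tag_counts = Counter(t for tags in tagged_items for t in set(tags))
    pair_counts = Counter(tuple(sorted(p)) for tags in tagged_items
                          for p in combinations(set(tags), 2))
    return {pair: math.log2((c / n) /
                            ((tag_counts[pair[0]] / n) * (tag_counts[pair[1]] / n)))
            for pair, c in pair_counts.items()}

items = [["cat", "pet"], ["cat", "pet"], ["dog", "pet"], ["cat"]]
scores = tag_pmi(items)
print(scores[("dog", "pet")] > scores[("cat", "pet")])  # -> True
```

Positive PMI flags pairs that co-occur more often than chance; here ("dog", "pet") always co-occur, so their PMI is positive despite the lower raw count.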
    Date
    20. 2.2009 10:29:07
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
