Search (38482 results, page 2 of 1925)

  1. Paletta, F.C.; Malheiro da Silva, A.: Information access in the digital era : document visualization strategy (2018) 0.14
    0.14459184 = product of:
      0.19278911 = sum of:
        0.08687113 = weight(_text_:da in 4853) [ClassicSimilarity], result of:
          0.08687113 = score(doc=4853,freq=2.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.42410251 = fieldWeight in 4853, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.0625 = fieldNorm(doc=4853)
        0.10237063 = product of:
          0.20474125 = sum of:
            0.20474125 = weight(_text_:silva in 4853) [ClassicSimilarity], result of:
              0.20474125 = score(doc=4853,freq=2.0), product of:
                0.31446302 = queryWeight, product of:
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.04269026 = queryNorm
                0.65108216 = fieldWeight in 4853, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4853)
          0.5 = coord(1/2)
        0.00354734 = product of:
          0.00709468 = sum of:
            0.00709468 = weight(_text_:a in 4853) [ClassicSimilarity], result of:
              0.00709468 = score(doc=4853,freq=4.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.14413087 = fieldWeight in 4853, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4853)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Type
    a
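The nested breakdowns above are Lucene ClassicSimilarity score explanations. As a sanity check, the leaf values for the term "da" in result 1 can be reproduced from the formula tf(freq) = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), fieldWeight = tf * idf * fieldNorm, score = queryWeight * fieldWeight. A minimal sketch (queryNorm is copied verbatim from the output above; everything else is computed):

```python
import math

# ClassicSimilarity building blocks, as shown in the explanation tree.
def idf(doc_freq, max_docs):
    # idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

QUERY_NORM = 0.04269026  # copied verbatim from the explanation above

def term_score(freq, doc_freq, max_docs, field_norm):
    # queryWeight = idf * queryNorm
    # fieldWeight = sqrt(freq) * idf * fieldNorm
    # score       = queryWeight * fieldWeight
    term_idf = idf(doc_freq, max_docs)
    query_weight = term_idf * QUERY_NORM
    field_weight = math.sqrt(freq) * term_idf * field_norm
    return query_weight * field_weight

# Term "da" in doc 4853: freq=2.0, docFreq=990, maxDocs=44218, fieldNorm=0.0625
score_da = term_score(2.0, 990, 44218, 0.0625)  # close to the 0.08687113 above
```

Lucene performs this arithmetic in float32, so the reproduced values agree with the printed ones only to about six significant digits. The per-document total then multiplies the sum of the term scores by the coord factor, here coord(3/4) = 0.75.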
  2. Bastos Leite, A.J. de; Paletta, F.C.; Silva Martins, M.F. da; Silveira, T.: The role of neuroscience in information and knowledge appropriation (2018) 0.14
    0.14381258 = product of:
      0.19175011 = sum of:
        0.08687113 = weight(_text_:da in 4863) [ClassicSimilarity], result of:
          0.08687113 = score(doc=4863,freq=2.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.42410251 = fieldWeight in 4863, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.0625 = fieldNorm(doc=4863)
        0.10237063 = product of:
          0.20474125 = sum of:
            0.20474125 = weight(_text_:silva in 4863) [ClassicSimilarity], result of:
              0.20474125 = score(doc=4863,freq=2.0), product of:
                0.31446302 = queryWeight, product of:
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.04269026 = queryNorm
                0.65108216 = fieldWeight in 4863, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4863)
          0.5 = coord(1/2)
        0.0025083479 = product of:
          0.0050166957 = sum of:
            0.0050166957 = weight(_text_:a in 4863) [ClassicSimilarity], result of:
              0.0050166957 = score(doc=4863,freq=2.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.10191591 = fieldWeight in 4863, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4863)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Type
    a
  3. Ackermann, E.: Piaget's constructivism, Papert's constructionism : what's the difference? (2001) 0.14
    0.14374939 = product of:
      0.19166586 = sum of:
        0.056502875 = product of:
          0.16950862 = sum of:
            0.16950862 = weight(_text_:3a in 692) [ClassicSimilarity], result of:
              0.16950862 = score(doc=692,freq=2.0), product of:
                0.3619285 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04269026 = queryNorm
                0.46834838 = fieldWeight in 692, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=692)
          0.33333334 = coord(1/3)
        0.13244762 = product of:
          0.26489523 = sum of:
            0.26489523 = weight(_text_:2c in 692) [ClassicSimilarity], result of:
              0.26489523 = score(doc=692,freq=2.0), product of:
                0.4524431 = queryWeight, product of:
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.04269026 = queryNorm
                0.5854775 = fieldWeight in 692, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=692)
          0.5 = coord(1/2)
        0.0027153667 = product of:
          0.0054307333 = sum of:
            0.0054307333 = weight(_text_:a in 692) [ClassicSimilarity], result of:
              0.0054307333 = score(doc=692,freq=6.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.11032722 = fieldWeight in 692, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=692)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    What is the difference between Piaget's constructivism and Papert's "constructionism"? Beyond the mere play on the words, I think the distinction holds, and that integrating both views can enrich our understanding of how people learn and grow. Piaget's constructivism offers a window into what children are interested in, and able to achieve, at different stages of their development. The theory describes how children's ways of doing and thinking evolve over time, and under which circumstances children are more likely to let go of, or hold onto, their currently held views. Piaget suggests that children have very good reasons not to abandon their worldviews just because someone else, be it an expert, tells them they're wrong. Papert's constructionism, in contrast, focuses more on the art of learning, or 'learning to learn', and on the significance of making things in learning. Papert is interested in how learners engage in a conversation with [their own or other people's] artifacts, and how these conversations boost self-directed learning, and ultimately facilitate the construction of new knowledge. He stresses the importance of tools, media, and context in human development. Integrating both perspectives illuminates the processes by which individuals come to make sense of their experience, gradually optimizing their interactions with the world.
    Content
    Cf.: https://www.semanticscholar.org/paper/Piaget-%E2%80%99-s-Constructivism-%2C-Papert-%E2%80%99-s-%3A-What-%E2%80%99-s-Ackermann/89cbcc1e740a4591443ff4765a6ae8df0fdf5554. Further pointers to related contributions can be found there. Also published in: Learning Group Publication 5(2001) no.3, p.438.
    Type
    a
  4. Lorenzon, E.J.; Gracioso, L. de Souza; Silva, M.D.P. da; Tinelli, M.; Amaral, R.M.; Faria, L.I.L. de; Hoffmann, W.A.M.: Controlled vocabulary used in intelligence information system for shoes (2012) 0.13
    0.12704104 = product of:
      0.16938806 = sum of:
        0.07601224 = weight(_text_:da in 863) [ClassicSimilarity], result of:
          0.07601224 = score(doc=863,freq=2.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.3710897 = fieldWeight in 863, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.0546875 = fieldNorm(doc=863)
        0.08957431 = product of:
          0.17914861 = sum of:
            0.17914861 = weight(_text_:silva in 863) [ClassicSimilarity], result of:
              0.17914861 = score(doc=863,freq=2.0), product of:
                0.31446302 = queryWeight, product of:
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.04269026 = queryNorm
                0.5696969 = fieldWeight in 863, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=863)
          0.5 = coord(1/2)
        0.0038015132 = product of:
          0.0076030265 = sum of:
            0.0076030265 = weight(_text_:a in 863) [ClassicSimilarity], result of:
              0.0076030265 = score(doc=863,freq=6.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.1544581 = fieldWeight in 863, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=863)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    The leather shoe and artifact production chain is very competitive in Brazil. It is important to develop strategies to identify, locate, organize and retrieve information to make this chain internationally competitive. In this context, we developed a specialized system with information used in competitive intelligence - InfoSIC. This paper discusses the controlled vocabulary developed for InfoSIC.
    Source
    Categories, contexts and relations in knowledge organization: Proceedings of the Twelfth International ISKO Conference 6-9 August 2012, Mysore, India. Eds.: Neelameghan, A. u. K.S. Raghavan
    Type
    a
  5. Grobe, K.: Da wird die Goldprägung blaß : Lehrt die Konkurrenz das Fürchten: Die Encarta-Enzyklopädie von Microsoft (1998) 0.12
    0.12088943 = product of:
      0.24177887 = sum of:
        0.15202448 = weight(_text_:da in 7133) [ClassicSimilarity], result of:
          0.15202448 = score(doc=7133,freq=2.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.7421794 = fieldWeight in 7133, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.109375 = fieldNorm(doc=7133)
        0.08975439 = sum of:
          0.008779218 = weight(_text_:a in 7133) [ClassicSimilarity], result of:
            0.008779218 = score(doc=7133,freq=2.0), product of:
              0.049223874 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.04269026 = queryNorm
              0.17835285 = fieldWeight in 7133, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.109375 = fieldNorm(doc=7133)
          0.08097517 = weight(_text_:22 in 7133) [ClassicSimilarity], result of:
            0.08097517 = score(doc=7133,freq=2.0), product of:
              0.149494 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04269026 = queryNorm
              0.5416616 = fieldWeight in 7133, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=7133)
      0.5 = coord(2/4)
    
    Date
    17. 7.1996 9:33:22
    Type
    a
  6. Brandão, W.C.; Santos, R.L.T.; Ziviani, N.; Moura, E.S. de; Silva, A.S. da: Learning to expand queries using entities (2014) 0.12
    0.11661854 = product of:
      0.15549138 = sum of:
        0.054294456 = weight(_text_:da in 1343) [ClassicSimilarity], result of:
          0.054294456 = score(doc=1343,freq=2.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.26506406 = fieldWeight in 1343, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1343)
        0.063981645 = product of:
          0.12796329 = sum of:
            0.12796329 = weight(_text_:silva in 1343) [ClassicSimilarity], result of:
              0.12796329 = score(doc=1343,freq=2.0), product of:
                0.31446302 = queryWeight, product of:
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.04269026 = queryNorm
                0.40692633 = fieldWeight in 1343, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1343)
          0.5 = coord(1/2)
        0.037215285 = sum of:
          0.008295582 = weight(_text_:a in 1343) [ClassicSimilarity], result of:
            0.008295582 = score(doc=1343,freq=14.0), product of:
              0.049223874 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.04269026 = queryNorm
              0.1685276 = fieldWeight in 1343, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1343)
          0.028919702 = weight(_text_:22 in 1343) [ClassicSimilarity], result of:
            0.028919702 = score(doc=1343,freq=2.0), product of:
              0.149494 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04269026 = queryNorm
              0.19345059 = fieldWeight in 1343, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1343)
      0.75 = coord(3/4)
    
    Abstract
    A substantial fraction of web search queries contain references to entities, such as persons, organizations, and locations. Recently, methods that exploit named entities have been shown to be more effective for query expansion than traditional pseudo-relevance feedback methods. In this article, we introduce a supervised learning approach that exploits named entities for query expansion using Wikipedia as a repository of high-quality feedback documents. In contrast with existing entity-oriented pseudo-relevance feedback approaches, we tackle query expansion as a learning-to-rank problem. As a result, not only do we select effective expansion terms but we also weigh these terms according to their predicted effectiveness. To this end, we exploit the rich structure of Wikipedia articles to devise discriminative term features, including each candidate term's proximity to the original query terms, as well as its frequency across multiple article fields and in category and infobox descriptors. Experiments on three Text REtrieval Conference web test collections attest to the effectiveness of our approach, with gains of up to 23.32% in terms of mean average precision, 19.49% in terms of precision at 10, and 7.86% in terms of normalized discounted cumulative gain compared with a state-of-the-art approach for entity-oriented query expansion.
    Date
    22. 8.2014 17:07:50
    Type
    a
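The abstract above frames query expansion as learning-to-rank over candidate terms described by features such as proximity to the query terms and per-field frequencies. A minimal sketch of that idea, with invented candidate terms, feature values, and hand-set weights (the paper learns the weights from training data; nothing here reflects its actual model):

```python
# Hypothetical candidate expansion terms with invented feature values,
# loosely patterned on the features named in the abstract (proximity to
# the query terms, frequency in article fields / infobox descriptors).
candidates = {
    "cupertino": {"proximity": 0.9, "infobox_freq": 3, "body_freq": 12},
    "company":   {"proximity": 0.4, "infobox_freq": 0, "body_freq": 25},
}

# Hand-set weights standing in for the learned ranking model; these
# numbers are purely illustrative.
WEIGHTS = {"proximity": 2.0, "infobox_freq": 0.5, "body_freq": 0.05}

def expansion_score(features):
    # Linear scoring: weighted sum of the candidate term's features.
    return sum(WEIGHTS[name] * value for name, value in features.items())

# Rank candidates; top terms would be appended to the query, weighted by
# their predicted effectiveness.
ranked = sorted(candidates, key=lambda t: expansion_score(candidates[t]),
                reverse=True)
```

The point of the learning-to-rank framing is exactly this last step: candidates are ordered and weighted by predicted effectiveness rather than selected by a fixed pseudo-relevance feedback heuristic.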
  7. Da Silva, A.M.; Azevedo, L.M. de; Nogueira, M.D.L.R.: A aplicacao do SIPORbase : uma proposta de indexacao do manuscrito e do livro antigo (1995) 0.11
    0.1101815 = product of:
      0.14690867 = sum of:
        0.065153345 = weight(_text_:da in 1639) [ClassicSimilarity], result of:
          0.065153345 = score(doc=1639,freq=2.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.31807688 = fieldWeight in 1639, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.046875 = fieldNorm(doc=1639)
        0.07677797 = product of:
          0.15355594 = sum of:
            0.15355594 = weight(_text_:silva in 1639) [ClassicSimilarity], result of:
              0.15355594 = score(doc=1639,freq=2.0), product of:
                0.31446302 = queryWeight, product of:
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.04269026 = queryNorm
                0.48831162 = fieldWeight in 1639, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1639)
          0.5 = coord(1/2)
        0.004977349 = product of:
          0.009954698 = sum of:
            0.009954698 = weight(_text_:a in 1639) [ClassicSimilarity], result of:
              0.009954698 = score(doc=1639,freq=14.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.20223314 = fieldWeight in 1639, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1639)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    SIPORbase, the System for Indexing in Portuguese, was developed by the National Library of Portugal, based on the LCSH. In contrast to the Brunet-Parguez system used in France, SIPORbase is a coextensive indexing language. Its initial application in 1989 to current bibliography has been extended to the collection of codices. Experience with manuscripts alone indicates a high degree of relevance in retrieval from the several hundred subject headings created so far
    Content
    Revised version of a presentation given at a LIBER workshop on The Brunet-Parguez system for subject indexing of ancient books, held in Toulouse in Feb 1994
    Footnote
    Translated title: The application of SIPORbase: a proposal for indexing ancient manuscripts and books
    Type
    a
  8. Cortez, E.; Herrera, M.R.; Silva, A.S. da; Moura, E.S. de; Neubert, M.: Lightweight methods for large-scale product categorization (2011) 0.11
    0.10990459 = product of:
      0.14653945 = sum of:
        0.065153345 = weight(_text_:da in 4758) [ClassicSimilarity], result of:
          0.065153345 = score(doc=4758,freq=2.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.31807688 = fieldWeight in 4758, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.046875 = fieldNorm(doc=4758)
        0.07677797 = product of:
          0.15355594 = sum of:
            0.15355594 = weight(_text_:silva in 4758) [ClassicSimilarity], result of:
              0.15355594 = score(doc=4758,freq=2.0), product of:
                0.31446302 = queryWeight, product of:
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.04269026 = queryNorm
                0.48831162 = fieldWeight in 4758, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4758)
          0.5 = coord(1/2)
        0.0046081296 = product of:
          0.009216259 = sum of:
            0.009216259 = weight(_text_:a in 4758) [ClassicSimilarity], result of:
              0.009216259 = score(doc=4758,freq=12.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.18723148 = fieldWeight in 4758, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4758)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    In this article, we present a study of classification methods for large-scale categorization of product offers on e-shopping web sites. We evaluate the performance of previously proposed approaches and deploy a probabilistic approach to model the classification problem. We also study an alternative way of modeling information about the description of product offers and investigate the use of the price and store of product offers as features in the classification process. Our experiments used two collections of over a million product offers previously categorized by human editors and taxonomies of hundreds of categories from a real e-shopping web site. In these experiments, our method achieved an improvement of up to 9% in the quality of the categorization in comparison with the best baseline we have found.
    Type
    a
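The abstract above mentions a probabilistic approach to product-offer categorization. As an illustration of that general family of lightweight methods (a plain multinomial Naive Bayes over toy offer titles, not the authors' actual model or data):

```python
import math
from collections import Counter, defaultdict

# Toy training data: (offer title, category). All titles and categories
# here are invented for illustration.
train = [
    ("leather shoe men", "shoes"),
    ("running shoe nike", "shoes"),
    ("usb cable 2m", "electronics"),
    ("hdmi cable gold", "electronics"),
]

cat_counts = Counter(c for _, c in train)
word_counts = defaultdict(Counter)
for title, c in train:
    word_counts[c].update(title.split())

def classify(title, alpha=1.0):
    # Multinomial Naive Bayes with Laplace smoothing: pick the category
    # maximizing log P(category) + sum of log P(word | category).
    vocab = {w for c in word_counts for w in word_counts[c]}
    best, best_lp = None, float("-inf")
    for c in cat_counts:
        lp = math.log(cat_counts[c] / len(train))
        total = sum(word_counts[c].values())
        for w in title.split():
            lp += math.log((word_counts[c][w] + alpha) /
                           (total + alpha * len(vocab)))
        if lp > best_lp:
            best, best_lp = c, lp
    return best
```

The same skeleton extends to the extra features the abstract mentions (price, store) by treating them as additional conditionally independent evidence per category.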
  9. Malheiro da Silva, A.; Ribeiro, F.: Documentation / Information and their paradigms : characterization and importance in research, education, and professional practice (2012) 0.11
    0.10990459 = product of:
      0.14653945 = sum of:
        0.065153345 = weight(_text_:da in 84) [ClassicSimilarity], result of:
          0.065153345 = score(doc=84,freq=2.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.31807688 = fieldWeight in 84, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.046875 = fieldNorm(doc=84)
        0.07677797 = product of:
          0.15355594 = sum of:
            0.15355594 = weight(_text_:silva in 84) [ClassicSimilarity], result of:
              0.15355594 = score(doc=84,freq=2.0), product of:
                0.31446302 = queryWeight, product of:
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.04269026 = queryNorm
                0.48831162 = fieldWeight in 84, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.046875 = fieldNorm(doc=84)
          0.5 = coord(1/2)
        0.0046081296 = product of:
          0.009216259 = sum of:
            0.009216259 = weight(_text_:a in 84) [ClassicSimilarity], result of:
              0.009216259 = score(doc=84,freq=12.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.18723148 = fieldWeight in 84, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=84)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Since 2004, the authors have designed a proposal of paradigms for the Documentation-Information field, which starts from a comprehensive meaning of the concept and is based on identifying the presence of a custodial, cultural, historicist-and-humanist, and technicist paradigm that has shaped the professional activity, education, and public policies of the archival, librarian, and museologist universe from the early 1800s to the mid-20th century. It also includes pointing out the emergence of a new post-custodial, informational, and scientific paradigm, generated by the profound changes taking place worldwide and that are summarized in strong, yet too generic, expressions such as "information era" or "globalization." This paper characterizes the two paradigms proposed, highlighting their dominant traits and showing their operational relevance at the level of education, research, and professional practice.
    Type
    a
  10. Moura, E.S. de; Fernandes, D.; Ribeiro-Neto, B.; Silva, A.S. da; Gonçalves, M.A.: Using structural information to improve search in Web collections (2010) 0.11
    0.109603465 = product of:
      0.14613795 = sum of:
        0.065153345 = weight(_text_:da in 4119) [ClassicSimilarity], result of:
          0.065153345 = score(doc=4119,freq=2.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.31807688 = fieldWeight in 4119, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.046875 = fieldNorm(doc=4119)
        0.07677797 = product of:
          0.15355594 = sum of:
            0.15355594 = weight(_text_:silva in 4119) [ClassicSimilarity], result of:
              0.15355594 = score(doc=4119,freq=2.0), product of:
                0.31446302 = queryWeight, product of:
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.04269026 = queryNorm
                0.48831162 = fieldWeight in 4119, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4119)
          0.5 = coord(1/2)
        0.004206628 = product of:
          0.008413256 = sum of:
            0.008413256 = weight(_text_:a in 4119) [ClassicSimilarity], result of:
              0.008413256 = score(doc=4119,freq=10.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.1709182 = fieldWeight in 4119, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4119)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    In this work, we investigate the problem of using the block structure of Web pages to improve ranking results. Starting with basic intuitions provided by the concepts of term frequency (TF) and inverse document frequency (IDF), we propose nine block-weight functions to distinguish the impact of term occurrences inside page blocks, instead of inside whole pages. These are then used to compute a modified BM25 ranking function. Using four distinct Web collections, we ran extensive experiments to compare our block-weight ranking formulas with two other baselines: (a) a BM25 ranking applied to full pages, and (b) a BM25 ranking that takes into account best blocks. Our results suggest that our block-weighting ranking method is superior to both baselines across all collections we used, with average gains in precision of 5 to 20%.
    Type
    a
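The abstract above describes replacing page-level term frequency with block-weighted frequencies inside a BM25-style function. A minimal sketch with invented block weights (one common BM25 variant; the paper's nine block-weight functions and exact formulation are not reproduced here):

```python
import math

def bm25_term(tf, df, n_docs, dl, avgdl, k1=1.2, b=0.75):
    # One common BM25 term-contribution variant; the paper's exact
    # modified formulation may differ.
    idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
    return idf * (tf * (k1 + 1)) / (tf + k1 * (1 - b + b * dl / avgdl))

def blocked_tf(block_tfs, block_weights):
    # Block-weighted term frequency: a weighted sum over per-block
    # frequencies instead of a single whole-page count.
    return sum(w * tf for tf, w in zip(block_tfs, block_weights))

# Hypothetical page: a term occurs 3x in the main-content block and 5x
# in a navigation block. Down-weighting the navigation block changes
# the effective tf fed into BM25.
tf_page = blocked_tf([3, 5], [1.0, 1.0])   # plain page-level tf
tf_block = blocked_tf([3, 5], [1.0, 0.1])  # nav block down-weighted
```

With the navigation block discounted, the term's BM25 contribution drops, which is the intended effect: occurrences in boilerplate blocks count for less than occurrences in content blocks.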
  11. Costa Carvalho, A. da; Rossi, C.; Moura, E.S. de; Silva, A.S. da; Fernandes, D.: LePrEF: Learn to precompute evidence fusion for efficient query evaluation (2012) 0.11
    0.10929237 = product of:
      0.14572316 = sum of:
        0.07678396 = weight(_text_:da in 278) [ClassicSimilarity], result of:
          0.07678396 = score(doc=278,freq=4.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.37485722 = fieldWeight in 278, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.0390625 = fieldNorm(doc=278)
        0.063981645 = product of:
          0.12796329 = sum of:
            0.12796329 = weight(_text_:silva in 278) [ClassicSimilarity], result of:
              0.12796329 = score(doc=278,freq=2.0), product of:
                0.31446302 = queryWeight, product of:
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.04269026 = queryNorm
                0.40692633 = fieldWeight in 278, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=278)
          0.5 = coord(1/2)
        0.004957558 = product of:
          0.009915116 = sum of:
            0.009915116 = weight(_text_:a in 278) [ClassicSimilarity], result of:
              0.009915116 = score(doc=278,freq=20.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.20142901 = fieldWeight in 278, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=278)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    State-of-the-art search engine ranking methods combine several distinct sources of relevance evidence to produce a high-quality ranking of results for each query. The fusion of information is currently done at query-processing time, which has a direct effect on the response time of search systems. Previous research also shows that an alternative to improve search efficiency in textual databases is to precompute term impacts at indexing time. In this article, we propose a novel alternative to precompute term impacts, providing a generic framework for combining any distinct set of sources of evidence by using a machine-learning technique. This method retains the advantages of producing high-quality results, but avoids the costs of combining evidence at query-processing time. Our method, called Learn to Precompute Evidence Fusion (LePrEF), uses genetic programming to compute a unified precomputed impact value for each term found in each document prior to query processing, at indexing time. Compared with previous research on precomputing term impacts, our method offers the advantage of providing a generic framework to precompute impact using any set of relevance evidence at any text collection, whereas previous research articles do not. The precomputed impact values are indexed and used later for computing document ranking at query-processing time. By doing so, our method effectively reduces the query processing to simple additions of such impacts. We show that this approach, while leading to results comparable to state-of-the-art ranking methods, also can lead to a significant decrease in computational costs during query processing.
    Type
    a
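The LePrEF idea in the abstract above — folding all relevance evidence into a single precomputed per-term impact at indexing time, so that query processing reduces to simple additions — can be sketched minimally. The index contents and impact values below are hypothetical, not the paper's learned ones:

```python
# Minimal sketch of ranking with precomputed term impacts, as described in
# the LePrEF abstract above: each (term, doc) pair carries one impact value
# fixed at indexing time, so scoring a query is just summing impacts.
# The index and the impact numbers here are invented for illustration.
from collections import defaultdict

# Inverted index: term -> {doc_id: precomputed_impact}
index = {
    "fusion":   {1: 0.82, 3: 0.40},
    "ranking":  {1: 0.55, 2: 0.91, 3: 0.10},
    "evidence": {2: 0.33, 3: 0.77},
}

def rank(query_terms):
    scores = defaultdict(float)
    for term in query_terms:
        for doc_id, impact in index.get(term, {}).items():
            scores[doc_id] += impact  # no per-query evidence fusion needed
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(rank(["fusion", "ranking"]))
```

Because the fusion work happens once at indexing time, the per-query cost is independent of how many sources of evidence were combined.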
  12. Silva Motta, G. da; Almada Garcia, P.A. de; Quintella, R.H.: ¬A patento-scientometric approach to venture capital investment prioritization (2015) 0.11
    0.109270394 = product of:
      0.14569385 = sum of:
        0.065153345 = weight(_text_:da in 1728) [ClassicSimilarity], result of:
          0.065153345 = score(doc=1728,freq=2.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.31807688 = fieldWeight in 1728, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.046875 = fieldNorm(doc=1728)
        0.07677797 = product of:
          0.15355594 = sum of:
            0.15355594 = weight(_text_:silva in 1728) [ClassicSimilarity], result of:
              0.15355594 = score(doc=1728,freq=2.0), product of:
                0.31446302 = queryWeight, product of:
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.04269026 = queryNorm
                0.48831162 = fieldWeight in 1728, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1728)
          0.5 = coord(1/2)
        0.0037625222 = product of:
          0.0075250445 = sum of:
            0.0075250445 = weight(_text_:a in 1728) [ClassicSimilarity], result of:
              0.0075250445 = score(doc=1728,freq=8.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.15287387 = fieldWeight in 1728, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1728)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    This paper proposes an approach to analyzing and prioritizing venture capital investments with the use of scientometric and patentometric indicators. The article highlights the importance of such investments in the development of technology-based companies and their positive impacts on the economic development of regions and countries. It also notes that the managers of venture capital funds struggle to objectify the evaluation of investment proposals. This paper analyzes the selection process of 10 companies, five of which received investments by the largest venture capital fund in Brazil and the other five of which were rejected by this same fund. We formulated scientometric and patentometric indicators related to each company and conducted a comparative analysis of each by considering the indicators grouped by the nonfinancial criteria (technology, market, and divestiture team) from analysis of the investment proposals. The proposed approach clarifies aspects of the criteria evaluated and contributes to the construction of a method for prioritizing venture capital investments.
    Type
    a
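The explain trees on this page follow Lucene's classic TF-IDF arithmetic: tf = sqrt(freq), queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, and the term score is queryWeight × fieldWeight, scaled by the coord factor. A quick numeric check against the `silva` term weight in entry 12 above:

```python
import math

# Reproduce the ClassicSimilarity arithmetic from entry 12's explain tree
# (term "silva", doc 1728); input values copied from the breakdown above.
freq, idf = 2.0, 7.3661537
query_norm, field_norm = 0.04269026, 0.046875

tf = math.sqrt(freq)                  # 1.4142135 = tf(freq=2.0)
query_weight = idf * query_norm       # 0.31446302 = queryWeight
field_weight = tf * idf * field_norm  # 0.48831162 = fieldWeight
score = query_weight * field_weight   # 0.15355594 = weight(_text_:silva ...)

print(score, score * 0.5)             # coord(1/2) halves it: 0.07677797
```

The same recipe reproduces every `weight(_text_:...)` node on this page; only freq, idf, and fieldNorm change per term and document.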
  13. Amorim, R.C.; Castro, J.A.; Silva, J.R. da; Ribeiro, C.: LabTablet: semantic metadata collection on a multi-domain laboratory notebook (2014) 0.11
    0.10889232 = product of:
      0.14518976 = sum of:
        0.065153345 = weight(_text_:da in 1583) [ClassicSimilarity], result of:
          0.065153345 = score(doc=1583,freq=2.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.31807688 = fieldWeight in 1583, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.046875 = fieldNorm(doc=1583)
        0.07677797 = product of:
          0.15355594 = sum of:
            0.15355594 = weight(_text_:silva in 1583) [ClassicSimilarity], result of:
              0.15355594 = score(doc=1583,freq=2.0), product of:
                0.31446302 = queryWeight, product of:
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.04269026 = queryNorm
                0.48831162 = fieldWeight in 1583, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1583)
          0.5 = coord(1/2)
        0.0032584397 = product of:
          0.0065168794 = sum of:
            0.0065168794 = weight(_text_:a in 1583) [ClassicSimilarity], result of:
              0.0065168794 = score(doc=1583,freq=6.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.13239266 = fieldWeight in 1583, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1583)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    The value of research data is recognized, and so is the importance of the associated metadata to contextualize, describe and ultimately render them understandable in the long term. Laboratory notebooks are an excellent source of domain-specific metadata, but this paper-based approach can pose risks of data loss, while limiting the possibilities of collaborative metadata production. The paper discusses the advantages of tools to complement paper-based laboratory notebooks in capturing metadata, regardless of the research domain. We propose LabTablet, an electronic laboratory book aimed at the collection of metadata from the early stages of the research workflow. To evaluate the use of LabTablet and the proposed workflow, researchers in two domains were asked to perform a set of tasks and provided insights about their experience. By rethinking the workflow and helping researchers to actively contribute to data description, the research outputs can be described with generic and domain-dependent metadata, thus improving their chances of being deposited, reused and preserved.
    Type
    a
  14. Saldanha, G. Silva => Silva Saldanha, G.: 0.11
    0.10858045 = product of:
      0.4343218 = sum of:
        0.4343218 = product of:
          0.8686436 = sum of:
            0.8686436 = weight(_text_:silva in 4736) [ClassicSimilarity], result of:
              0.8686436 = score(doc=4736,freq=4.0), product of:
                0.31446302 = queryWeight, product of:
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.04269026 = queryNorm
                2.7623076 = fieldWeight in 4736, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.1875 = fieldNorm(doc=4736)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  15. Saldanha, G. Silva => Silva Saldanha, G.: 0.11
    0.10858045 = product of:
      0.4343218 = sum of:
        0.4343218 = product of:
          0.8686436 = sum of:
            0.8686436 = weight(_text_:silva in 4784) [ClassicSimilarity], result of:
              0.8686436 = score(doc=4784,freq=4.0), product of:
                0.31446302 = queryWeight, product of:
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.04269026 = queryNorm
                2.7623076 = fieldWeight in 4784, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.1875 = fieldNorm(doc=4784)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  16. Torres, R. de Silva => Silva Torres, R. de: 0.11
    0.10858045 = product of:
      0.4343218 = sum of:
        0.4343218 = product of:
          0.8686436 = sum of:
            0.8686436 = weight(_text_:silva in 5321) [ClassicSimilarity], result of:
              0.8686436 = score(doc=5321,freq=4.0), product of:
                0.31446302 = queryWeight, product of:
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.04269026 = queryNorm
                2.7623076 = fieldWeight in 5321, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.1875 = fieldNorm(doc=5321)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  17. Silva, D. Soares- => Soares-Silva, D.: 0.11
    0.10858045 = product of:
      0.4343218 = sum of:
        0.4343218 = product of:
          0.8686436 = sum of:
            0.8686436 = weight(_text_:silva in 5946) [ClassicSimilarity], result of:
              0.8686436 = score(doc=5946,freq=4.0), product of:
                0.31446302 = queryWeight, product of:
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.04269026 = queryNorm
                2.7623076 = fieldWeight in 5946, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.1875 = fieldNorm(doc=5946)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  18. Cortez, E.; Silva, A.S. da; Gonçalves, M.A.; Mesquita, F.; Moura, E.S. de: ¬A flexible approach for extracting metadata from bibliographic citations (2009) 0.09
    0.09294644 = product of:
      0.12392859 = sum of:
        0.054294456 = weight(_text_:da in 2848) [ClassicSimilarity], result of:
          0.054294456 = score(doc=2848,freq=2.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            0.26506406 = fieldWeight in 2848, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2848)
        0.063981645 = product of:
          0.12796329 = sum of:
            0.12796329 = weight(_text_:silva in 2848) [ClassicSimilarity], result of:
              0.12796329 = score(doc=2848,freq=2.0), product of:
                0.31446302 = queryWeight, product of:
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.04269026 = queryNorm
                0.40692633 = fieldWeight in 2848, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3661537 = idf(docFreq=75, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2848)
          0.5 = coord(1/2)
        0.005652486 = product of:
          0.011304972 = sum of:
            0.011304972 = weight(_text_:a in 2848) [ClassicSimilarity], result of:
              0.011304972 = score(doc=2848,freq=26.0), product of:
                0.049223874 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04269026 = queryNorm
                0.22966442 = fieldWeight in 2848, product of:
                  5.0990195 = tf(freq=26.0), with freq of:
                    26.0 = termFreq=26.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2848)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    In this article we present FLUX-CiM, a novel method for extracting components (e.g., author names, article titles, venues, page numbers) from bibliographic citations. Our method does not rely on patterns encoding specific delimiters used in a particular citation style. This feature yields a high degree of automation and flexibility, and allows FLUX-CiM to extract from citations in any given format. Differently from previous methods that are based on models learned from user-driven training, our method relies on a knowledge base automatically constructed from an existing set of sample metadata records from a given field (e.g., computer science, health sciences, social sciences, etc.). These records are usually available on the Web or other public data repositories. To demonstrate the effectiveness and applicability of our proposed method, we present a series of experiments in which we apply it to extract bibliographic data from citations in articles of different fields. Results of these experiments exhibit precision and recall levels above 94% for all fields, and perfect extraction for the large majority of citations tested. In addition, in a comparison against a state-of-the-art information-extraction method, ours produced superior results without the training phase required by that method. Finally, we present a strategy for using bibliographic data resulting from the extraction process with FLUX-CiM to automatically update and expand the knowledge base of a given domain. We show that this strategy can be used to achieve good extraction results even if only a very small initial sample of bibliographic records is available for building the knowledge base.
    Type
    a
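As a hedged illustration of the knowledge-base idea in the FLUX-CiM abstract above (a toy sketch, not the authors' actual algorithm): split a citation on punctuation delimiters and assign each segment to the field whose known vocabulary overlaps it most. The field names and the tiny knowledge base below are invented for the sketch:

```python
import re

# Toy knowledge base, standing in for one automatically built from sample
# metadata records of a field: field name -> known lowercase tokens.
kb = {
    "author": {"cortez", "silva", "moura"},
    "title":  {"flexible", "approach", "extracting", "metadata", "citations"},
    "venue":  {"jasist", "journal", "information", "science"},
}

def label_segments(citation):
    """Split a citation string on delimiters and label each segment
    with the field whose vocabulary overlap is largest."""
    segments = [s.strip() for s in re.split(r"[;,.]\s+", citation) if s.strip()]
    labeled = []
    for seg in segments:
        tokens = {t.lower() for t in seg.split()}
        field = max(kb, key=lambda f: len(tokens & kb[f]))
        labeled.append((field, seg))
    return labeled

print(label_segments(
    "Cortez E; A flexible approach for extracting metadata; JASIST"))
```

Because the matching is driven by the knowledge base rather than by delimiter patterns of one citation style, the same code works on differently formatted citations, which is the flexibility the abstract emphasizes.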
  19. Graça Simões, M. da => Simões, M. da Graça: 0.09
    0.09214076 = product of:
      0.36856303 = sum of:
        0.36856303 = weight(_text_:da in 4700) [ClassicSimilarity], result of:
          0.36856303 = score(doc=4700,freq=4.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            1.7993147 = fieldWeight in 4700, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.1875 = fieldNorm(doc=4700)
      0.25 = coord(1/4)
    
  20. Silveira, L. Reis da => Reis da Silveira, L.: 0.09
    0.09214076 = product of:
      0.36856303 = sum of:
        0.36856303 = weight(_text_:da in 4800) [ClassicSimilarity], result of:
          0.36856303 = score(doc=4800,freq=4.0), product of:
            0.20483522 = queryWeight, product of:
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.04269026 = queryNorm
            1.7993147 = fieldWeight in 4800, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.7981725 = idf(docFreq=990, maxDocs=44218)
              0.1875 = fieldNorm(doc=4800)
      0.25 = coord(1/4)
    
