Search (10 results, page 1 of 1)

  • Filter: type_ss:"el"
  1. Faro, S.; Francesconi, E.; Sandrucci, V.: Thesauri KOS analysis and selected thesaurus mapping methodology on the project case-study (2007) 0.03
    0.033196237 = product of:
      0.08299059 = sum of:
        0.05885388 = weight(_text_:study in 2227) [ClassicSimilarity], result of:
          0.05885388 = score(doc=2227,freq=4.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.4064256 = fieldWeight in 2227, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.0625 = fieldNorm(doc=2227)
        0.02413671 = product of:
          0.04827342 = sum of:
            0.04827342 = weight(_text_:22 in 2227) [ClassicSimilarity], result of:
              0.04827342 = score(doc=2227,freq=2.0), product of:
                0.15596174 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044537213 = queryNorm
                0.30952093 = fieldWeight in 2227, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2227)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    - Introduction to the thesaurus interoperability problem
    - Analysis of the thesauri for the project case study
    - Overview of schema/ontology mapping methodologies
    - The proposed approach for thesaurus mapping
    - Standards for implementing the proposed methodology
    Date
    7.11.2008 10:40:22
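  The indented breakdown under each result is a Lucene "explain" tree for the ClassicSimilarity (TF-IDF) scorer. As a minimal sketch of how such a tree bottoms out in arithmetic, the following Python reproduces result 1's score from the constants shown above (queryNorm is copied from the tree rather than recomputed from the full query; the function names are mine):

      import math

      # Building blocks of Lucene's ClassicSimilarity (TFIDFSimilarity)
      def tf(freq):                               # term-frequency component
          return math.sqrt(freq)

      def idf(doc_freq, max_docs):                # inverse document frequency
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      QUERY_NORM = 0.044537213                    # copied from the tree above

      def clause_score(freq, doc_freq, field_norm, max_docs=44218):
          w = idf(doc_freq, max_docs)
          query_weight = w * QUERY_NORM             # "queryWeight" in the tree
          field_weight = tf(freq) * w * field_norm  # "fieldWeight" in the tree
          return query_weight * field_weight

      study = clause_score(freq=4.0, doc_freq=4653, field_norm=0.0625)  # 0.05885388
      digit = clause_score(freq=2.0, doc_freq=3622, field_norm=0.0625)  # 0.04827342
      # "22" sits one level deeper, so coord(1/2) halves it before the outer sum;
      # the outer coord(2/5) means 2 of 5 query clauses matched this document.
      print((study + digit * 0.5) * (2.0 / 5.0))    # ~0.0331962, result 1's score

  The same recipe, with freq, docFreq, and fieldNorm swapped in, reproduces every other score tree on this page.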
  2. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.03
    0.0322106 = product of:
      0.0805265 = sum of:
        0.062423967 = weight(_text_:study in 4649) [ClassicSimilarity], result of:
          0.062423967 = score(doc=4649,freq=8.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.43107945 = fieldWeight in 4649, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.046875 = fieldNorm(doc=4649)
        0.018102532 = product of:
          0.036205065 = sum of:
            0.036205065 = weight(_text_:22 in 4649) [ClassicSimilarity], result of:
              0.036205065 = score(doc=4649,freq=2.0), product of:
                0.15596174 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044537213 = queryNorm
                0.23214069 = fieldWeight in 4649, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4649)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    More and more cultural heritage institutions publish their collections, vocabularies and metadata on the Web. The resulting Web of linked cultural data opens up exciting new possibilities for searching and browsing through these cultural heritage collections. We report on ongoing work in which we investigate the estimation of relevance in this Web of Culture. We study existing measures of semantic distance and how they apply to two use cases. The use cases relate to the structured, multilingual and multimodal nature of the Culture Web. We distinguish between measures using the Web, such as Google distance and PMI, and measures using the Linked Data Web, i.e. the semantic structure of metadata vocabularies. We perform a small study in which we compare these semantic distance measures to human judgements of relevance. Although it is too early to draw any definitive conclusions, the study provides new insights into the applicability of semantic distance measures to the Web of Culture, and clear starting points for further research.
    Date
    26.12.2011 13:40:22
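  Among the Web-based measures the abstract mentions, Google distance has a closed form that needs only page counts: NGD(x, y) = (max(log f(x), log f(y)) - log f(x, y)) / (log N - min(log f(x), log f(y))). A small sketch with made-up hit counts (the counts and the index size N are placeholders, not figures from the paper):

      import math

      def ngd(f_x, f_y, f_xy, n):
          """Normalized Google Distance from hit counts: f_x and f_y are hits
          for each term alone, f_xy hits for both together, n the index size."""
          lx, ly, lxy = math.log(f_x), math.log(f_y), math.log(f_xy)
          return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

      # Placeholder counts: strongly co-occurring terms yield a small distance
      print(ngd(f_x=9_000_000, f_y=8_000_000, f_xy=6_000_000, n=25_000_000_000))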
  3. Kashyap, M.M.: Application of integrative approach in the teaching of library science techniques and application of information technology (2011) 0.03
    0.028452847 = product of:
      0.071132116 = sum of:
        0.020807989 = weight(_text_:study in 4395) [ClassicSimilarity], result of:
          0.020807989 = score(doc=4395,freq=2.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.14369315 = fieldWeight in 4395, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.03125 = fieldNorm(doc=4395)
        0.050324127 = product of:
          0.100648254 = sum of:
            0.100648254 = weight(_text_:teaching in 4395) [ClassicSimilarity], result of:
              0.100648254 = score(doc=4395,freq=6.0), product of:
                0.24199244 = queryWeight, product of:
                  5.433489 = idf(docFreq=524, maxDocs=44218)
                  0.044537213 = queryNorm
                0.41591486 = fieldWeight in 4395, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.433489 = idf(docFreq=524, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4395)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Today many libraries use computers and allied information technologies to improve their work methods and services. Consequently, libraries need professional staff, or need to retrain their present staff, to meet the challenges posed by the introduction of these technologies. To supply such staff, the departments of Library and Information Science in India introduced new courses of study that expose students to the use and application of computers and other allied technologies. Among the courses introduced are: Computer Application in Libraries; Systems Analysis and Design Technique; Design and Development of Computer-based Library Information Systems; Database Organisation and Design; Library Networking; and Use and Application of Communication Technology. It is felt that these computer- and information-technology-oriented courses need to be restructured, revised, and more harmoniously blended with the traditional mainstream courses of the library and information science discipline. We must alter the strategy of teaching library techniques, such as classification, cataloguing, and library procedures, alongside the techniques of designing computer-based library information systems and services. The use and application of these techniques become interwoven when we shift from a manually operated library environment to a computer-based one. It is therefore necessary to follow an integrative approach when teaching these techniques to students of library and information science, or when training library staff to design, develop, and implement computer-based library information systems and services. In the following sections of this paper, we outline the correspondence between certain concepts and techniques developed by computer specialists and those developed by librarians in their respective domains. Since we use the techniques of both domains in the design and implementation of computer-based library information systems and services, the lessons that expound these supplementary and complementary techniques must likewise be integrated.
    Source
    http://lisuncg.net/icl/blogs-news/madan-mohan-kashyap/2011/01/20/application-integrative-approach-teaching-library-science-
  4. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.03
    0.027896503 = product of:
      0.06974126 = sum of:
        0.04414041 = weight(_text_:study in 1967) [ClassicSimilarity], result of:
          0.04414041 = score(doc=1967,freq=4.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.3048192 = fieldWeight in 1967, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.046875 = fieldNorm(doc=1967)
        0.025600849 = product of:
          0.051201697 = sum of:
            0.051201697 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
              0.051201697 = score(doc=1967,freq=4.0), product of:
                0.15596174 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044537213 = queryNorm
                0.32829654 = fieldWeight in 1967, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1967)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
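  FRSAD's core move, which the modeling above extends to classification data, is to separate a thema (anything that can be the subject of a work) from its nomens (the signs that denote it in a given scheme, language, or script). A minimal sketch of how one DDC class with several translated captions might be carried under that split (the class number and captions are illustrative, not taken from the study):

      from dataclasses import dataclass, field

      @dataclass
      class Nomen:          # FRSAD: a sign, or sequence of signs, denoting a thema
          value: str
          scheme: str
          language: str

      @dataclass
      class Thema:          # FRSAD: any entity used as the subject of a work
          identifier: str
          nomens: list = field(default_factory=list)

      # One concept, several appellations across versions of the same scheme
      t = Thema("ddc:628.1")
      t.nomens += [
          Nomen("628.1", scheme="DDC 22", language="zxx"),  # the notation itself
          Nomen("Water supply", scheme="DDC 22 (English)", language="en"),
          Nomen("Vattenförsörjning", scheme="DDC 22 (Swedish)", language="sv"),
      ]
      print([n.value for n in t.nomens])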
  5. Liu, S.: Decomposing DDC synthesized numbers (1996) 0.02
    0.024931317 = product of:
      0.062328294 = sum of:
        0.026009986 = weight(_text_:study in 5969) [ClassicSimilarity], result of:
          0.026009986 = score(doc=5969,freq=2.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.17961644 = fieldWeight in 5969, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5969)
        0.03631831 = product of:
          0.07263662 = sum of:
            0.07263662 = weight(_text_:teaching in 5969) [ClassicSimilarity], result of:
              0.07263662 = score(doc=5969,freq=2.0), product of:
                0.24199244 = queryWeight, product of:
                  5.433489 = idf(docFreq=524, maxDocs=44218)
                  0.044537213 = queryNorm
                0.30016068 = fieldWeight in 5969, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.433489 = idf(docFreq=524, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5969)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Much literature has been written speculating upon how classification can be used in online catalogs to improve information retrieval. While some empirical studies have been done exploring whether the direct use of traditional classification schemes designed for a manual environment is effective and efficient in the online environment, none has manipulated these manual classifications in such a way as to take full advantage of the power of both the classification and computer. It has been suggested by some authors, such as Wajenberg and Drabenstott, that this power could be realized if the individual components of synthesized DDC numbers could be identified and indexed. This paper looks at the feasibility of automatically decomposing DDC synthesized numbers and the implications of such decomposition for information retrieval. Based on an analysis of the instructions for synthesizing numbers in the main class Arts (700) and all DDC Tables, 17 decomposition rules were defined, 13 covering the Add Notes and four the Standard Subdivisions. 1,701 DDC synthesized numbers were decomposed by a computer system called DND (Dewey Number Decomposer), developed by the author. From the 1,701 numbers, 600 were randomly selected for examination by three judges, each evaluating 200 numbers. The decomposition success rate was 100% and it was concluded that synthesized DDC numbers can be accurately decomposed automatically. The study has implications for information retrieval, expert systems for assigning DDC numbers, automatic indexing, switching language development, enhancing classifiers' work, teaching library school students, and providing quality control for DDC number assignments. These implications were explored using a prototype retrieval system.
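  Because DDC synthesis follows explicit add instructions, decomposition can be driven by a rule table. A toy illustration in the spirit of DND, with a single hypothetical rule standing in for the 17 the study defines (the rule encoding and function names are mine):

      # Toy decomposer in the spirit of DND: one hypothetical add-note rule.
      # Under 641.59 (cooking characteristic of specific places), the trailing
      # digits are an area notation added from DDC Table 2.
      RULES = [("641.59", "base: cooking by place", "Table 2 area notation")]

      def decompose(number):
          for prefix, base_label, rest_label in RULES:
              if number.startswith(prefix) and len(number) > len(prefix):
                  return [(prefix, base_label), (number[len(prefix):], rest_label)]
          return [(number, "no matching rule")]

      # 641.5945 = 641.59 + 45 (Italy in Table 2)
      print(decompose("641.5945"))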
  6. Allo, P.; Baumgaertner, B.; D'Alfonso, S.; Fresco, N.; Gobbo, F.; Grubaugh, C.; Iliadis, A.; Illari, P.; Kerr, E.; Primiero, G.; Russo, F.; Schulz, C.; Taddeo, M.; Turilli, M.; Vakarelov, O.; Zenil, H.: ¬The philosophy of information : an introduction (2013) 0.02
    0.01856924 = product of:
      0.0464231 = sum of:
        0.015605992 = weight(_text_:study in 3380) [ClassicSimilarity], result of:
          0.015605992 = score(doc=3380,freq=2.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.10776986 = fieldWeight in 3380, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3380)
        0.030817106 = product of:
          0.061634213 = sum of:
            0.061634213 = weight(_text_:teaching in 3380) [ClassicSimilarity], result of:
              0.061634213 = score(doc=3380,freq=4.0), product of:
                0.24199244 = queryWeight, product of:
                  5.433489 = idf(docFreq=524, maxDocs=44218)
                  0.044537213 = queryNorm
                0.2546948 = fieldWeight in 3380, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.433489 = idf(docFreq=524, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3380)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Philosophy "done informationally" has been around a long time, but PI as a discipline is quite new. PI takes age-old philosophical debates and engages them with up-to-the minute conceptual issues generated by our ever-changing, information-laden world. This alters the philosophical debates, and makes them interesting to many more people - including many philosophically-minded people who aren't subscribing philosophers. We, the authors, are young researchers who think of our work as part of PI, taking this engaged approach. We're excited by it and want to teach it. Students are excited by it and want to study it. Writing a traditional textbook takes a while, and PI is moving quickly. A traditional textbook doesn't seem like the right approach for the philosophy of the information age. So we got together to take a new approach, team-writing this electronic text to make it available more rapidly and openly.
    Content
    See also: http://www.socphilinfo.org/teaching/book-pi-intro: "This book serves as the main reference for an undergraduate course on Philosophy of Information. The book is written to be accessible to the typical undergraduate student of Philosophy and does not require propaedeutic courses in Logic, Epistemology or Ethics. Each chapter includes a rich collection of references for the student interested in furthering her understanding of the topics reviewed in the book. The book covers all the main topics of the Philosophy of Information and it should be considered an overview and not a comprehensive, in-depth analysis of a philosophical area. As a consequence, 'The Philosophy of Information: a Simple Introduction' does not contain research material as it is not aimed at graduate students or researchers. The book is available for free in multiple formats and it is updated every twelve months by the team of the π Research Network: Patrick Allo, Bert Baumgaertner, Anthony Beavers, Simon D'Alfonso, Penny Driscoll, Luciano Floridi, Nir Fresco, Carson Grubaugh, Phyllis Illari, Eric Kerr, Giuseppe Primiero, Federica Russo, Christoph Schulz, Mariarosaria Taddeo, Matteo Turilli, Orlin Vakarelov. The version for 2013 is now available as a pdf. The content of this version will soon be integrated in the redesign of the teaching section. The beta version from last year will provisionally remain accessible through the Table of Content on this page."
  7. Boldi, P.; Santini, M.; Vigna, S.: PageRank as a function of the damping factor (2005) 0.02
    0.016438173 = product of:
      0.041095432 = sum of:
        0.026009986 = weight(_text_:study in 2564) [ClassicSimilarity], result of:
          0.026009986 = score(doc=2564,freq=2.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.17961644 = fieldWeight in 2564, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2564)
        0.015085445 = product of:
          0.03017089 = sum of:
            0.03017089 = weight(_text_:22 in 2564) [ClassicSimilarity], result of:
              0.03017089 = score(doc=2564,freq=2.0), product of:
                0.15596174 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044537213 = queryNorm
                0.19345059 = fieldWeight in 2564, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2564)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    PageRank is defined as the stationary state of a Markov chain. The chain is obtained by perturbing the transition matrix induced by a web graph with a damping factor alpha that spreads uniformly part of the rank. The choice of alpha is eminently empirical, and in most cases the original suggestion alpha=0.85 by Brin and Page is still used. Recently, however, the behaviour of PageRank with respect to changes in alpha was discovered to be useful in link-spam detection. Moreover, an analytical justification of the value chosen for alpha is still missing. In this paper, we give the first mathematical analysis of PageRank when alpha changes. In particular, we show that, contrary to popular belief, for real-world graphs values of alpha close to 1 do not give a more meaningful ranking. Then, we give closed-form formulae for PageRank derivatives of any order, and an extension of the Power Method that approximates them with convergence O(t^k alpha^t) for the k-th derivative. Finally, we show a tight connection between iterated computation and analytical behaviour by proving that the k-th iteration of the Power Method gives exactly the PageRank value obtained using a Maclaurin polynomial of degree k. The latter result paves the way towards the application of analytical methods to the study of PageRank.
    Date
    16. 1.2016 10:22:28
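  For readers who want to reproduce the dependence on alpha, a compact power-method PageRank is enough to experiment with; this sketch handles dangling nodes by uniform redistribution (the toy graph and tolerances are arbitrary choices, not the paper's setup):

      import numpy as np

      def pagerank(adj, alpha=0.85, tol=1e-12, max_iter=1000):
          """Power method on the damped chain; adj[i][j] = 1 for a link i -> j."""
          a = np.asarray(adj, dtype=float)
          n = a.shape[0]
          out = a.sum(axis=1, keepdims=True)
          # Row-stochastic transitions; dangling rows are spread uniformly
          p = np.where(out > 0, a / np.where(out == 0, 1.0, out), 1.0 / n)
          x = np.full(n, 1.0 / n)
          for _ in range(max_iter):
              x_next = alpha * (x @ p) + (1.0 - alpha) / n
              if np.abs(x_next - x).sum() < tol:
                  break
              x = x_next
          return x

      # Tiny 4-node web; node 3 is dangling
      adj = [[0, 1, 0, 0], [0, 0, 1, 0], [1, 0, 0, 1], [0, 0, 0, 0]]
      for alpha in (0.5, 0.85, 0.99):
          print(alpha, np.round(pagerank(adj, alpha), 4))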
  8. Baeza-Yates, R.; Boldi, P.; Castillo, C.: Generalizing PageRank : damping functions for link-based ranking algorithms (2006) 0.02
    0.016438173 = product of:
      0.041095432 = sum of:
        0.026009986 = weight(_text_:study in 2565) [ClassicSimilarity], result of:
          0.026009986 = score(doc=2565,freq=2.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.17961644 = fieldWeight in 2565, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2565)
        0.015085445 = product of:
          0.03017089 = sum of:
            0.03017089 = weight(_text_:22 in 2565) [ClassicSimilarity], result of:
              0.03017089 = score(doc=2565,freq=2.0), product of:
                0.15596174 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044537213 = queryNorm
                0.19345059 = fieldWeight in 2565, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2565)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This paper introduces a family of link-based ranking algorithms that propagate page importance through links. In these algorithms there is a damping function that decreases with distance, so a direct link implies more endorsement than a link through a long path. PageRank is the most widely known ranking function of this family. The main objective of this paper is to determine whether this family of ranking techniques has some interest per se, and how different choices for the damping function impact on rank quality and on convergence speed. Even though our results suggest that PageRank can be approximated with other simpler forms of rankings that may be computed more efficiently, our focus is of a more speculative nature, in that it aims at separating the kernel of PageRank, that is, link-based importance propagation, from the way propagation decays over paths. We focus on three damping functions, having linear, exponential, and hyperbolic decay on the lengths of the paths. The exponential decay corresponds to PageRank, and the other functions are new. Our presentation includes algorithms, analysis, comparisons and experiments that study their behavior under different parameters in real Web graph data. Among other results, we show how to calculate a linear approximation that induces a page ordering that is almost identical to PageRank's using a fixed small number of iterations; comparisons were performed using Kendall's tau on large domain datasets.
    Date
    16. 1.2016 10:22:28
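  The family studied here can be written as a functional ranking R = sum over path lengths t of damping(t) * x0 * P^t, with PageRank recovered by the exponential choice damping(t) = (1 - alpha) * alpha^t. A truncated sketch of the three decay shapes the authors compare (the truncation depth, normalization, and toy matrix are my simplifications):

      import numpy as np

      def functional_rank(p, damping, depth=50):
          """Truncated functional ranking: sum_t damping(t) * x0 * P^t,
          with x0 uniform and p row-stochastic."""
          n = p.shape[0]
          x = np.full(n, 1.0 / n)
          rank = np.zeros(n)
          for t in range(depth):
              rank += damping(t) * x
              x = x @ p
          return rank

      ALPHA, L, BETA = 0.85, 10, 2.0
      exponential = lambda t: (1 - ALPHA) * ALPHA ** t   # PageRank's choice
      linear      = lambda t: max(L - t, 0)              # vanishes past length L
      hyperbolic  = lambda t: 1.0 / (t + 1) ** BETA      # heavy tail

      p = np.array([[0.0, 1.0, 0.0], [0.5, 0.0, 0.5], [1.0, 0.0, 0.0]])
      for name, d in (("exp", exponential), ("lin", linear), ("hyp", hyperbolic)):
          print(name, np.round(functional_rank(p, d), 4))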
  9. Graphic details : a scientific study of the importance of diagrams to science (2016) 0.01
    0.014432656 = product of:
      0.03608164 = sum of:
        0.027030373 = weight(_text_:study in 3035) [ClassicSimilarity], result of:
          0.027030373 = score(doc=3035,freq=6.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.18666288 = fieldWeight in 3035, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3035)
        0.009051266 = product of:
          0.018102532 = sum of:
            0.018102532 = weight(_text_:22 in 3035) [ClassicSimilarity], result of:
              0.018102532 = score(doc=3035,freq=2.0), product of:
                0.15596174 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044537213 = queryNorm
                0.116070345 = fieldWeight in 3035, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3035)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    As the team describe in a paper posted on arXiv (http://arxiv.org/abs/1605.04951), they found that figures did indeed matter - but not all in the same way. An average paper in PubMed Central has about one diagram for every three pages and gets 1.67 citations. Papers with more diagrams per page and, to a lesser extent, plots per page tended to be more influential (on average, a paper accrued two more citations for every extra diagram per page, and one more for every extra plot per page). By contrast, including photographs and equations seemed to decrease the chances of a paper being cited by others. That agrees with a study from 2012, whose authors counted (by hand) the number of mathematical expressions in over 600 biology papers and found that each additional equation per page reduced the number of citations a paper received by 22%. This does not mean that researchers should rush to include more diagrams in their next paper. Dr Howe has not shown what is behind the effect, which may merely be one of correlation rather than causation. It could, for example, be that papers with lots of diagrams tend to be those that illustrate new concepts, and thus start a whole new field of inquiry. Such papers will certainly be cited a lot. On the other hand, the presence of equations really might reduce citations. Biologists (as are most of those who write and read the papers in PubMed Central) are notoriously maths-averse. If that is the case, looking in a physics archive would probably produce a different result.
    Dr Howe and his colleagues do, however, believe that the study of diagrams can result in new insights. A figure showing new metabolic pathways in a cell, for example, may summarise hundreds of experiments. Since illustrations can convey important scientific concepts in this way, they think that browsing through related figures from different papers may help researchers come up with new theories. As Dr Howe puts it, "the unit of scientific currency is closer to the figure than to the paper." With this thought in mind, the team have created a website, viziometrics.org (http://viziometrics.org/), where the millions of images sorted by their program can be searched using key words. Their next plan is to extract the information from particular types of scientific figure, to create comprehensive "super" figures: a giant network of all the known chemical processes in a cell, for example, or the best-available tree of life. At just one such superfigure per paper, though, the citation records of articles containing such all-embracing diagrams may very well undermine the correlation that prompted their creation in the first place. Call it the ultimate marriage of chart and science.
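  Read naively, the quoted coefficients give a back-of-the-envelope citation model (this is a crude linear-times-multiplicative reading of the article's numbers, not the fitted model from either study):

      def expected_citations(diagrams_per_page, plots_per_page, equations_per_page,
                             baseline=1.67):
          """+2 citations per extra diagram/page, +1 per extra plot/page (from the
          arXiv paper), then -22% per equation/page (the 2012 biology study)."""
          additive = baseline + 2.0 * diagrams_per_page + 1.0 * plots_per_page
          return additive * 0.78 ** equations_per_page

      print(expected_citations(1.0, 0.5, 0))  # diagram-heavy: ~4.2 expected citations
      print(expected_citations(0.0, 0.0, 2))  # equation-heavy: ~1.0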
  10. DeSilva, J.M.; Traniello, J.F.A.; Claxton, A.G.; Fannin, L.D.: When and why did human brains decrease in size? : a new change-point analysis and insights from brain evolution in ants (2021) 0.01
    0.009862903 = product of:
      0.024657257 = sum of:
        0.015605992 = weight(_text_:study in 405) [ClassicSimilarity], result of:
          0.015605992 = score(doc=405,freq=2.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.10776986 = fieldWeight in 405, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.0234375 = fieldNorm(doc=405)
        0.009051266 = product of:
          0.018102532 = sum of:
            0.018102532 = weight(_text_:22 in 405) [ClassicSimilarity], result of:
              0.018102532 = score(doc=405,freq=2.0), product of:
                0.15596174 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044537213 = queryNorm
                0.116070345 = fieldWeight in 405, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=405)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Human brain size nearly quadrupled in the six million years since Homo last shared a common ancestor with chimpanzees, but human brains are thought to have decreased in volume since the end of the last Ice Age. The timing and reason for this decrease are enigmatic. Here we use change-point analysis to estimate the timing of changes in the rate of hominin brain evolution. We find that hominin brains experienced positive rate changes at 2.1 and 1.5 million years ago, coincident with the early evolution of Homo and technological innovations evident in the archeological record. But we also find that human brain size reduction was surprisingly recent, occurring in the last 3,000 years. Our dating does not support hypotheses concerning brain size reduction as a by-product of body size reduction, a result of a shift to an agricultural diet, or a consequence of self-domestication. We suggest our analysis supports the hypothesis that the recent decrease in brain size may instead result from the externalization of knowledge and advantages of group-level decision-making due in part to the advent of social systems of distributed cognition and the storage and sharing of information. Humans live in social groups in which multiple brains contribute to the emergence of collective intelligence. Although difficult to study in the deep history of Homo, the impacts of group size, social organization, collective intelligence and other potential selective forces on brain evolution can be elucidated using ants as models. The remarkable ecological diversity of ants and their species richness encompasses forms convergent in aspects of human sociality, including large group size, agrarian life histories, division of labor, and collective cognition. Ants provide a wide range of social systems to generate and test hypotheses concerning brain size enlargement or reduction and aid in interpreting patterns of brain evolution identified in humans. Although humans and ants represent very different routes in social and cognitive evolution, the insights ants offer can broadly inform us of the selective forces that influence brain size.
    Source
    Frontiers in ecology and evolution, 22 October 2021 [https://www.frontiersin.org/articles/10.3389/fevo.2021.742639/full]
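  Change-point analysis of the kind the abstract describes can be illustrated with a least-squares search for a single breakpoint: fit separate trend lines before and after each candidate break and keep the break that minimizes total squared error. A minimal sketch on synthetic data (the data, noise level, and single-break assumption are mine; the published analysis works on fossil cranial volumes):

      import numpy as np

      def best_changepoint(t, y):
          """Least-squares single change point: fit a line to each side of every
          candidate break and keep the break with the lowest total squared error."""
          best_sse, best_t = np.inf, None
          for k in range(2, len(t) - 2):              # at least two points per side
              sse = 0.0
              for seg_t, seg_y in ((t[:k], y[:k]), (t[k:], y[k:])):
                  coef = np.polyfit(seg_t, seg_y, 1)
                  sse += float(np.sum((np.polyval(coef, seg_t) - seg_y) ** 2))
              if sse < best_sse:
                  best_sse, best_t = sse, t[k]
          return best_t

      rng = np.random.default_rng(0)
      t = np.linspace(-10.0, 0.0, 60)                 # kyr before present, synthetic
      y = np.where(t < -3, 1450.0, 1450.0 - 30.0 * (t + 3)) + rng.normal(0, 8, t.size)
      print(best_changepoint(t, y))                   # recovers a break near t = -3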