Search (180 results, page 9 of 9)

  • type_ss:"a"
  • type_ss:"el"
  1. Zia, L.L.: The NSF National Science, Technology, Engineering, and Mathematics Education Digital Library (NSDL) Program : new projects from fiscal year 2004 (2005) 0.00
    0.0044258563 = product of:
      0.008851713 = sum of:
        0.008851713 = product of:
          0.026555136 = sum of:
            0.026555136 = weight(_text_:k in 1221) [ClassicSimilarity], result of:
              0.026555136 = score(doc=1221,freq=4.0), product of:
                0.15869603 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.04445543 = queryNorm
                0.16733333 = fieldWeight in 1221, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1221)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
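    The breakdown above is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring. As a minimal illustration, the following Python sketch recomputes the displayed numbers from the printed components; the idf line assumes Lucene's usual ln(maxDocs / (docFreq + 1)) + 1 formulation, while everything else is taken directly from the tree.

        import math

        # Components printed in the explain tree for result 1 (doc 1221), query term "k"
        idf = math.log(44218 / (3384 + 1)) + 1    # ~3.569778 (assumed ClassicSimilarity idf)
        query_norm = 0.04445543
        tf = math.sqrt(4.0)                       # 2.0 = tf(freq=4.0)
        field_norm = 0.0234375                    # per-field length norm

        query_weight = idf * query_norm           # ~0.15869603
        field_weight = tf * idf * field_norm      # ~0.16733333 = fieldWeight
        term_score = query_weight * field_weight  # ~0.026555136

        # Only 1 of 3 query clauses and 1 of 2 top-level clauses matched (coord factors)
        print(term_score * (1 / 3) * (1 / 2))     # ~0.0044258563

    The freq=2.0 entries further down differ only in tf (sqrt(2) ≈ 1.4142135) and in the fieldNorm value shown in their trees.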
    
    Abstract
    These three elements reflect a refinement of NSDL's initial emphasis on collecting educational resources, materials, and other digital learning objects, towards enabling learners to "connect" or otherwise find pathways to resources appropriate to their needs. Projects are also developing both the capacities of individual users and the capacity of larger communities of learners to use and contribute to NSDL. For the FY2004 funding cycle, one hundred forty-four proposals sought approximately $126.5 million in total funding. Twenty-four new awards were made with a cumulative budget of approximately $10.2 million. These include four in the Pathways track, twelve in the Services track, and eight in the Targeted Research track. As in the earlier years of the program, sister directorates to the NSF Directorate for Education and Human Resources (EHR) are providing significant co-funding of projects. Participating directorates for FY2004 are GEO and MPS. Within EHR, the Advanced Technological Education program and the Experimental Program to Stimulate Competitive Research are also co-funding projects. Complete information on the technical and organizational progress of NSDL, including links to current Standing Committees and community workspaces, may be found at <http://nsdl.org/community/nsdlgroups.php>. All workspaces are open to the public, and interested organizations and individuals are encouraged to learn more about NSDL and join in its development. Following is a list of the new FY04 awards displaying the official NSF award number, the project title, the grantee institution, and the name of the Principal Investigator (PI). A condensed description of the project is also included. Full abstracts are available from the NSDL program site (under Related URLs see the link to Abstracts of Recent Awards Made Through This Program). The projects are displayed by track and are listed by award number. In addition, seven of these projects have explicit relevance to applications to pre-K to 12 education (indicated with a * below). Four others have clear potential for application to the pre-K to 12 arena (indicated with a ** below).
  2. RWI/PH: Auf der Suche nach dem entscheidenden Wort : die Häufung bestimmter Wörter innerhalb eines Textes macht diese zu Schlüsselwörtern (2012) 0.00
    0.0044258563 = product of:
      0.008851713 = sum of:
        0.008851713 = product of:
          0.026555136 = sum of:
            0.026555136 = weight(_text_:k in 331) [ClassicSimilarity], result of:
              0.026555136 = score(doc=331,freq=4.0), product of:
                0.15869603 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.04445543 = queryNorm
                0.16733333 = fieldWeight in 331, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=331)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    "Die Dresdner Wissenschaftler haben die semantischen Eigenschaften von Texten mathematisch untersucht, indem sie zehn verschiedene englische Texte in unterschiedlichen Formen kodierten. Dazu zählt unter anderem die englische Ausgabe von Leo Tolstois "Krieg und Frieden". Beispielsweise übersetzten die Forscher Buchstaben innerhalb eines Textes in eine Binär-Sequenz. Dazu ersetzten sie alle Vokale durch eine Eins und alle Konsonanten durch eine Null. Mit Hilfe weiterer mathematischer Funktionen beleuchteten die Wissenschaftler dabei verschiedene Ebenen des Textes, also sowohl einzelne Vokale, Buchstaben als auch ganze Wörter, die in verschiedenen Formen kodiert wurden. Innerhalb des ganzen Textes lassen sich so wiederkehrende Muster finden. Diesen Zusammenhang innerhalb des Textes bezeichnet man als Langzeitkorrelation. Diese gibt an, ob zwei Buchstaben an beliebig weit voneinander entfernten Textstellen miteinander in Verbindung stehen - beispielsweise gibt es wenn wir an einer Stelle einen Buchstaben "K" finden, eine messbare höhere Wahrscheinlichkeit den Buchstaben "K" einige Seiten später nochmal zu finden. "Es ist zu erwarten, dass wenn es in einem Buch an einer Stelle um Krieg geht, die Wahrscheinlichkeit hoch ist das Wort Krieg auch einige Seiten später zu finden. Überraschend ist es, dass wir die hohe Wahrscheinlichkeit auch auf der Buchstabenebene finden", so Altmann.
  3. Henze, V.; Junger, U.; Mödden, E.: Grundzüge und erste Schritte der künftigen inhaltlichen Erschliessung von Publikationen in der Deutschen Nationalbibliothek (2017) 0.00
    0.0041727372 = product of:
      0.0083454745 = sum of:
        0.0083454745 = product of:
          0.025036423 = sum of:
            0.025036423 = weight(_text_:k in 3772) [ClassicSimilarity], result of:
              0.025036423 = score(doc=3772,freq=2.0), product of:
                0.15869603 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.04445543 = queryNorm
                0.15776339 = fieldWeight in 3772, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3772)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    See also: Ceynowa, K.: In Frankfurt lesen jetzt zuerst Maschinen. At: http://www.faz.net/aktuell/feuilleton/buecher/maschinen-lesen-buecher-deutsche-nationalbibliothek-setzt-auf-technik-15128954.html?printPagedArticle=true#pageIndex_2.
  4. Harnett, K.: Machine learning confronts the elephant in the room : a visual prank exposes an Achilles' heel of computer vision systems: Unlike humans, they can't do a double take (2018) 0.00
    0.0041727372 = product of:
      0.0083454745 = sum of:
        0.0083454745 = product of:
          0.025036423 = sum of:
            0.025036423 = weight(_text_:k in 4449) [ClassicSimilarity], result of:
              0.025036423 = score(doc=4449,freq=2.0), product of:
                0.15869603 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.04445543 = queryNorm
                0.15776339 = fieldWeight in 4449, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4449)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  5. Aleksander, K.; Bucher, M.; Dornick, S.; Franke-Maier, M.; Strickert, M.: Mehr Wissen sichtbar machen : Inhaltserschließung in Bibliotheken und alternative Zukünfte. (2022) 0.00
    0.0041727372 = product of:
      0.0083454745 = sum of:
        0.0083454745 = product of:
          0.025036423 = sum of:
            0.025036423 = weight(_text_:k in 677) [ClassicSimilarity], result of:
              0.025036423 = score(doc=677,freq=2.0), product of:
                0.15869603 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.04445543 = queryNorm
                0.15776339 = fieldWeight in 677, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.03125 = fieldNorm(doc=677)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  6. Baker, T.: A grammar of Dublin Core (2000) 0.00
    0.004015398 = product of:
      0.008030796 = sum of:
        0.008030796 = product of:
          0.02409239 = sum of:
            0.02409239 = weight(_text_:22 in 1236) [ClassicSimilarity], result of:
              0.02409239 = score(doc=1236,freq=2.0), product of:
                0.15567535 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04445543 = queryNorm
                0.15476047 = fieldWeight in 1236, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1236)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    26.12.2011 14:01:22
  7. Reiner, U.: Automatische DDC-Klassifizierung bibliografischer Titeldatensätze der Deutschen Nationalbibliografie (2009) 0.00
    0.004015398 = product of:
      0.008030796 = sum of:
        0.008030796 = product of:
          0.02409239 = sum of:
            0.02409239 = weight(_text_:22 in 3284) [ClassicSimilarity], result of:
              0.02409239 = score(doc=3284,freq=2.0), product of:
                0.15567535 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04445543 = queryNorm
                0.15476047 = fieldWeight in 3284, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3284)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    22. 1.2010 14:41:24
  8. Bradford, R.B.: Relationship discovery in large text collections using Latent Semantic Indexing (2006) 0.00
    0.004015398 = product of:
      0.008030796 = sum of:
        0.008030796 = product of:
          0.02409239 = sum of:
            0.02409239 = weight(_text_:22 in 1163) [ClassicSimilarity], result of:
              0.02409239 = score(doc=1163,freq=2.0), product of:
                0.15567535 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04445543 = queryNorm
                0.15476047 = fieldWeight in 1163, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1163)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    Proceedings of the Fourth Workshop on Link Analysis, Counterterrorism, and Security, SIAM Data Mining Conference, Bethesda, MD, 20-22 April, 2006. [http://www.siam.org/meetings/sdm06/workproceed/Link%20Analysis/15.pdf]
  9. Freyberg, L.: Die Lesbarkeit der Welt : Rezension zu 'The Concept of Information in Library and Information Science. A Field in Search of Its Boundaries: 8 Short Comments Concerning Information'. In: Cybernetics and Human Knowing. Vol. 22 (2015), 1, 57-80. Kurzartikel von Luciano Floridi, Søren Brier, Torkild Thellefsen, Martin Thellefsen, Bent Sørensen, Birger Hjørland, Brenda Dervin, Ken Herold, Per Hasle und Michael Buckland (2016) 0.00
    0.004015398 = product of:
      0.008030796 = sum of:
        0.008030796 = product of:
          0.02409239 = sum of:
            0.02409239 = weight(_text_:22 in 3335) [ClassicSimilarity], result of:
              0.02409239 = score(doc=3335,freq=2.0), product of:
                0.15567535 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04445543 = queryNorm
                0.15476047 = fieldWeight in 3335, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3335)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  10. Somers, J.: Torching the modern-day library of Alexandria : somewhere at Google there is a database containing 25 million books and nobody is allowed to read them. (2017) 0.00
    0.004015398 = product of:
      0.008030796 = sum of:
        0.008030796 = product of:
          0.02409239 = sum of:
            0.02409239 = weight(_text_:22 in 3608) [ClassicSimilarity], result of:
              0.02409239 = score(doc=3608,freq=2.0), product of:
                0.15567535 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04445543 = queryNorm
                0.15476047 = fieldWeight in 3608, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3608)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    You were going to get one-click access to the full text of nearly every book that's ever been published. Books still in print you'd have to pay for, but everything else - a collection slated to grow larger than the holdings of the Library of Congress, Harvard, the University of Michigan, or any of the great national libraries of Europe - would have been available for free at terminals that were going to be placed in every local library that wanted one. At the terminal you were going to be able to search tens of millions of books and read every page of any book you found. You'd be able to highlight passages and make annotations and share them; for the first time, you'd be able to pinpoint an idea somewhere inside the vastness of the printed record, and send somebody straight to it with a link. Books would become as instantly available, searchable, and copy-pasteable - as alive in the digital world - as web pages. It was to be the realization of a long-held dream. "The universal library has been talked about for millennia," Richard Ovenden, the head of Oxford's Bodleian Libraries, has said. "It was possible to think in the Renaissance that you might be able to amass the whole of published knowledge in a single room or a single institution." In the spring of 2011, it seemed we'd amassed it in a terminal small enough to fit on a desk. "This is a watershed event and can serve as a catalyst for the reinvention of education, research, and intellectual life," one eager observer wrote at the time. On March 22 of that year, however, the legal agreement that would have unlocked a century's worth of books and peppered the country with access terminals to a universal library was rejected under Rule 23(e)(2) of the Federal Rules of Civil Procedure by the U.S. District Court for the Southern District of New York. When the library at Alexandria burned, it was said to be an "international catastrophe." When the most significant humanities project of our time was dismantled in court, the scholars, archivists, and librarians who'd had a hand in its undoing breathed a sigh of relief, for they believed, at the time, that they had narrowly averted disaster.
  11. Rötzer, F.: KI-Programm besser als Menschen im Verständnis natürlicher Sprache (2018) 0.00
    0.004015398 = product of:
      0.008030796 = sum of:
        0.008030796 = product of:
          0.02409239 = sum of:
            0.02409239 = weight(_text_:22 in 4217) [ClassicSimilarity], result of:
              0.02409239 = score(doc=4217,freq=2.0), product of:
                0.15567535 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04445543 = queryNorm
                0.15476047 = fieldWeight in 4217, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4217)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    22. 1.2018 11:32:44
  12. Jäger, L.: Von Big Data zu Big Brother (2018) 0.00
    0.004015398 = product of:
      0.008030796 = sum of:
        0.008030796 = product of:
          0.02409239 = sum of:
            0.02409239 = weight(_text_:22 in 5234) [ClassicSimilarity], result of:
              0.02409239 = score(doc=5234,freq=2.0), product of:
                0.15567535 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04445543 = queryNorm
                0.15476047 = fieldWeight in 5234, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5234)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    22. 1.2018 11:33:49
  13. Markey, K.: The online library catalog : paradise lost and paradise regained? (2007) 0.00
    0.0036511454 = product of:
      0.0073022908 = sum of:
        0.0073022908 = product of:
          0.021906871 = sum of:
            0.021906871 = weight(_text_:k in 1172) [ClassicSimilarity], result of:
              0.021906871 = score(doc=1172,freq=2.0), product of:
                0.15869603 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.04445543 = queryNorm
                0.13804297 = fieldWeight in 1172, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1172)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  14. Bertolucci, K.: Happiness is taxonomy : four structures for Snoopy - libraries' method of categorizing and classification (2003) 0.00
    0.003129553 = product of:
      0.006259106 = sum of:
        0.006259106 = product of:
          0.018777318 = sum of:
            0.018777318 = weight(_text_:k in 1212) [ClassicSimilarity], result of:
              0.018777318 = score(doc=1212,freq=2.0), product of:
                0.15869603 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.04445543 = queryNorm
                0.118322544 = fieldWeight in 1212, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1212)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  15. Choudhury, G.S.; DiLauro, T.; Droettboom, M.; Fujinaga, I.; MacMillan, K.: Strike up the score : deriving searchable and playable digital formats from sheet music (2001) 0.00
    0.003129553 = product of:
      0.006259106 = sum of:
        0.006259106 = product of:
          0.018777318 = sum of:
            0.018777318 = weight(_text_:k in 1220) [ClassicSimilarity], result of:
              0.018777318 = score(doc=1220,freq=2.0), product of:
                0.15869603 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.04445543 = queryNorm
                0.118322544 = fieldWeight in 1220, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1220)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  16. Foerster, H. von; Müller, A.; Müller, K.H.: Rück- und Vorschauen : Heinz von Foerster im Gespräch mit Albert Müller und Karl H. Müller (2001) 0.00
    0.0030115487 = product of:
      0.0060230973 = sum of:
        0.0060230973 = product of:
          0.018069291 = sum of:
            0.018069291 = weight(_text_:22 in 5988) [ClassicSimilarity], result of:
              0.018069291 = score(doc=5988,freq=2.0), product of:
                0.15567535 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04445543 = queryNorm
                0.116070345 = fieldWeight in 5988, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=5988)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    10. 9.2006 17:22:54
  17. Lavoie, B.; Connaway, L.S.; Dempsey, L.: Anatomy of aggregate collections : the example of Google print for libraries (2005) 0.00
    0.0030115487 = product of:
      0.0060230973 = sum of:
        0.0060230973 = product of:
          0.018069291 = sum of:
            0.018069291 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
              0.018069291 = score(doc=1184,freq=2.0), product of:
                0.15567535 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04445543 = queryNorm
                0.116070345 = fieldWeight in 1184, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1184)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    26.12.2011 14:08:22
  18. Graphic details : a scientific study of the importance of diagrams to science (2016) 0.00
    0.0030115487 = product of:
      0.0060230973 = sum of:
        0.0060230973 = product of:
          0.018069291 = sum of:
            0.018069291 = weight(_text_:22 in 3035) [ClassicSimilarity], result of:
              0.018069291 = score(doc=3035,freq=2.0), product of:
                0.15567535 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04445543 = queryNorm
                0.116070345 = fieldWeight in 3035, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3035)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    As the team describe in a paper posted (http://arxiv.org/abs/1605.04951) on arXiv, they found that figures did indeed matter - but not all in the same way. An average paper in PubMed Central has about one diagram for every three pages and gets 1.67 citations. Papers with more diagrams per page and, to a lesser extent, plots per page tended to be more influential (on average, a paper accrued two more citations for every extra diagram per page, and one more for every extra plot per page). By contrast, including photographs and equations seemed to decrease the chances of a paper being cited by others. That agrees with a study from 2012, whose authors counted (by hand) the number of mathematical expressions in over 600 biology papers and found that each additional equation per page reduced the number of citations a paper received by 22%. This does not mean that researchers should rush to include more diagrams in their next paper. Dr Howe has not shown what is behind the effect, which may merely be one of correlation rather than causation. It could, for example, be that papers with lots of diagrams tend to be those that illustrate new concepts, and thus start a whole new field of inquiry. Such papers will certainly be cited a lot. On the other hand, the presence of equations really might reduce citations. Biologists (as are most of those who write and read the papers in PubMed Central) are notoriously maths-averse. If that is the case, looking in a physics archive would probably produce a different result.
  19. DeSilva, J.M.; Traniello, J.F.A.; Claxton, A.G.; Fannin, L.D.: When and why did human brains decrease in size? : a new change-point analysis and insights from brain evolution in ants (2021) 0.00
    0.0030115487 = product of:
      0.0060230973 = sum of:
        0.0060230973 = product of:
          0.018069291 = sum of:
            0.018069291 = weight(_text_:22 in 405) [ClassicSimilarity], result of:
              0.018069291 = score(doc=405,freq=2.0), product of:
                0.15567535 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04445543 = queryNorm
                0.116070345 = fieldWeight in 405, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=405)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    Frontiers in ecology and evolution, 22 October 2021 [https://www.frontiersin.org/articles/10.3389/fevo.2021.742639/full]
  20. Kaiser, M.; Lieder, H.J.; Majcen, K.; Vallant, H.: New ways of sharing and using authority information : the LEAF project (2003) 0.00
    0.0026079607 = product of:
      0.0052159214 = sum of:
        0.0052159214 = product of:
          0.015647763 = sum of:
            0.015647763 = weight(_text_:k in 1166) [ClassicSimilarity], result of:
              0.015647763 = score(doc=1166,freq=2.0), product of:
                0.15869603 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.04445543 = queryNorm
                0.098602116 = fieldWeight in 1166, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1166)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    

Languages

  • d 90
  • e 87
  • a 1
  • sp 1