Search (377 results, page 1 of 19)

  • type_ss:"el"
  • year_i:[2010 TO 2020}
  1. Roy, W.; Gray, C.: Preparing existing metadata for repository batch import : a recipe for a fickle food (2018) 0.06
    0.055722963 = product of:
      0.08358444 = sum of:
        0.019940332 = weight(_text_:of in 4550) [ClassicSimilarity], result of:
          0.019940332 = score(doc=4550,freq=16.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.24433708 = fieldWeight in 4550, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4550)
        0.06364411 = sum of:
          0.028290104 = weight(_text_:science in 4550) [ClassicSimilarity], result of:
            0.028290104 = score(doc=4550,freq=4.0), product of:
              0.13747036 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.05218836 = queryNorm
              0.20579056 = fieldWeight in 4550, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4550)
          0.03535401 = weight(_text_:22 in 4550) [ClassicSimilarity], result of:
            0.03535401 = score(doc=4550,freq=2.0), product of:
              0.18275474 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05218836 = queryNorm
              0.19345059 = fieldWeight in 4550, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4550)
      0.6666667 = coord(2/3)
    
    Abstract
    In 2016, the University of Waterloo began offering a mediated copyright review and deposit service to support the growth of our institutional repository UWSpace. This resulted in the need to batch import large lists of published works into the institutional repository quickly and accurately. A range of methods have been proposed for harvesting publications metadata en masse, but many technological solutions can easily become detached from a workflow that is both reproducible for support staff and applicable to a range of situations. Many repositories offer the capacity for batch upload via CSV, so our method provides a template Python script that leverages the Habanero library for populating CSV files with existing metadata retrieved from the CrossRef API. In our case, we have combined this with useful metadata contained in a TSV file downloaded from Web of Science in order to enrich our metadata as well. The appeal of this 'low-maintenance' method is that it provides more robust options for gathering metadata semi-automatically, and only requires the user's ability to access Web of Science and the Python program, while still remaining flexible enough for local customizations.
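    The workflow described above can be sketched in a few lines of Python. The snippet below is a minimal illustration, not the authors' actual template script: it assumes the habanero Crossref client named in the abstract, and the DOI list, selected fields and CSV column names are placeholders that a real batch-import template (and the Web of Science TSV enrichment) would replace.

```python
# Sketch: fill a CSV for repository batch import with metadata from CrossRef.
# Assumes the habanero Crossref client; DOIs, fields and column names are placeholders.
import csv

from habanero import Crossref

FIELDS = ["doi", "title", "authors", "journal", "year"]

def fetch_record(cr, doi):
    """Return a flat dict of selected CrossRef metadata fields for one DOI."""
    msg = cr.works(ids=doi)["message"]
    authors = "; ".join(
        f"{a.get('family', '')}, {a.get('given', '')}" for a in msg.get("author", [])
    )
    return {
        "doi": doi,
        "title": (msg.get("title") or [""])[0],
        "authors": authors,
        "journal": (msg.get("container-title") or [""])[0],
        "year": msg.get("issued", {}).get("date-parts", [[None]])[0][0],
    }

if __name__ == "__main__":
    cr = Crossref()
    dois = ["10.1000/example-doi"]  # placeholder; e.g. read from a Web of Science export
    with open("batch_import.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        for doi in dois:
            writer.writerow(fetch_record(cr, doi))
```

    In practice the columns would be mapped onto the repository's own batch-import schema and merged with the Web of Science export before upload, as the abstract describes.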
    Date
    10.11.2018 16:27:22
  2. Freyberg, L.: ¬Die Lesbarkeit der Welt : Rezension zu 'The Concept of Information in Library and Information Science. A Field in Search of Its Boundaries: 8 Short Comments Concerning Information'. In: Cybernetics and Human Knowing. Vol. 22 (2015), 1, 57-80. Kurzartikel von Luciano Floridi, Søren Brier, Torkild Thellefsen, Martin Thellefsen, Bent Sørensen, Birger Hjørland, Brenda Dervin, Ken Herold, Per Hasle und Michael Buckland (2016) 0.05
    0.05111937 = product of:
      0.07667905 = sum of:
        0.012611373 = weight(_text_:of in 3335) [ClassicSimilarity], result of:
          0.012611373 = score(doc=3335,freq=10.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.15453234 = fieldWeight in 3335, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=3335)
        0.06406768 = sum of:
          0.035784468 = weight(_text_:science in 3335) [ClassicSimilarity], result of:
            0.035784468 = score(doc=3335,freq=10.0), product of:
              0.13747036 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.05218836 = queryNorm
              0.26030678 = fieldWeight in 3335, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.03125 = fieldNorm(doc=3335)
          0.028283209 = weight(_text_:22 in 3335) [ClassicSimilarity], result of:
            0.028283209 = score(doc=3335,freq=2.0), product of:
              0.18275474 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05218836 = queryNorm
              0.15476047 = fieldWeight in 3335, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=3335)
      0.6666667 = coord(2/3)
    
    Abstract
    It is once again time to update the concept of "information", or at least to deliver a report on the status quo. Information is the central object of information science and one of the most important research subjects of library and information science. Surprisingly, however, a continuous discourse comparable to the critical engagement with concepts, and their resulting updating, in the humanities does not take place consistently, at least in the German-speaking world.1 In the interest of basic theoretical research and of working out a shared conceptual matrix, this would certainly be desirable. Just last year, the journal "Cybernetics and Human Knowing", edited by Søren Brier (see "The foundation of LIS in information science and semiotics"2 as well as "Semiotics in Information Science. An Interview with Søren Brier on the application of semiotic theories and the epistemological problem of a transdisciplinary Information Science"3), published eight short statements on the concept of information by noted philosophers and library and information scientists that are well worth reading. Unfortunately, the journal "Cybernetics & Human Knowing" is difficult to access in Germany, since it is not an open-access journal and is subscribed to by only eight German libraries.4 Given this poor availability, it seems sensible to offer a detailed review of these eight short articles here.
    The journal, which according to its subtitle deals with "second order cybernetics, autopoiesis and cyber-semiotics", has existed as a print edition since 1992/93. Since 1998 (volume 5, issue 1) it has also been offered electronically, for a fee, as part of a package from the publisher Imprint Academic in Exeter. Given the journal's orientation, which could be regarded as a theoretical contribution to the digital humanities (avant la lettre), the concept of information is treated there regularly. In particular, the phenomenologically and mathematically grounded semiotics of Charles Sanders Peirce comes up again and again in this context. The connection to practice, above all in the field of library and information science (LIS), always plays a major role here, something that can also be observed in Brier himself, who in his main work "Cybersemiotics" applies Peirce's sign categories to, among other things, the librarian's activity of indexing.5 Issue 1/2015 of the journal now asks "What underlines Information?" and contains, among others, articles on the draft of a philosophy of information by the Chinese philosopher Wu Kun as well as on Peirce and Spencer Brown. The eight short articles on the concept of information in library and information science were compiled by the Thellefsen brothers (Torkild and Martin) together with Bent Sørensen, who also jointly contributed one of the commentaries themselves.
  3. Open MIND (2015) 0.05
    0.04934041 = product of:
      0.07401061 = sum of:
        0.018652473 = weight(_text_:of in 1648) [ClassicSimilarity], result of:
          0.018652473 = score(doc=1648,freq=14.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.22855641 = fieldWeight in 1648, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1648)
        0.055358134 = sum of:
          0.020004123 = weight(_text_:science in 1648) [ClassicSimilarity], result of:
            0.020004123 = score(doc=1648,freq=2.0), product of:
              0.13747036 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.05218836 = queryNorm
              0.1455159 = fieldWeight in 1648, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1648)
          0.03535401 = weight(_text_:22 in 1648) [ClassicSimilarity], result of:
            0.03535401 = score(doc=1648,freq=2.0), product of:
              0.18275474 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05218836 = queryNorm
              0.19345059 = fieldWeight in 1648, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1648)
      0.6666667 = coord(2/3)
    
    Abstract
    This is an edited collection of 39 original papers and as many commentaries and replies. The target papers and replies were written by senior members of the MIND Group, while all commentaries were written by junior group members. All papers and commentaries have undergone a rigorous process of anonymous peer review, during which the junior members of the MIND Group acted as reviewers. The final versions of all the target articles, commentaries and replies have undergone additional editorial review. Besides offering a cross-section of ongoing, cutting-edge research in philosophy and cognitive science, this collection is also intended to be a free electronic resource for teaching. It therefore also contains a selection of online supporting materials, pointers to video and audio files and to additional free material supplied by the 92 authors represented in this volume. We will add more multimedia material, a searchable literature database, and tools to work with the online version in the future. All contributions to this collection are strictly open access. They can be downloaded, printed, and reproduced by anyone.
    Date
    27. 1.2015 11:48:22
  4. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.05
    0.046049424 = product of:
      0.13814826 = sum of:
        0.13814826 = product of:
          0.41444477 = sum of:
            0.41444477 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.41444477 = score(doc=1826,freq=2.0), product of:
                0.4424535 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05218836 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
  5. Graphic details : a scientific study of the importance of diagrams to science (2016) 0.04
    0.043186974 = product of:
      0.06478046 = sum of:
        0.022779156 = weight(_text_:of in 3035) [ClassicSimilarity], result of:
          0.022779156 = score(doc=3035,freq=58.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.27912235 = fieldWeight in 3035, product of:
              7.615773 = tf(freq=58.0), with freq of:
                58.0 = termFreq=58.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3035)
        0.042001307 = sum of:
          0.020788899 = weight(_text_:science in 3035) [ClassicSimilarity], result of:
            0.020788899 = score(doc=3035,freq=6.0), product of:
              0.13747036 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.05218836 = queryNorm
              0.15122458 = fieldWeight in 3035, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.0234375 = fieldNorm(doc=3035)
          0.021212406 = weight(_text_:22 in 3035) [ClassicSimilarity], result of:
            0.021212406 = score(doc=3035,freq=2.0), product of:
              0.18275474 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05218836 = queryNorm
              0.116070345 = fieldWeight in 3035, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=3035)
      0.6666667 = coord(2/3)
    
    Abstract
    A PICTURE is said to be worth a thousand words. That metaphor might be expected to pertain a fortiori in the case of scientific papers, where a figure can brilliantly illuminate an idea that might otherwise be baffling. Papers with figures in them should thus be easier to grasp than those without. They should therefore reach larger audiences and, in turn, be more influential simply by virtue of being more widely read. But are they?
    Content
    Bill Howe and his colleagues at the University of Washington, in Seattle, decided to find out. First, they trained a computer algorithm to distinguish between various sorts of figures, which they defined as diagrams, equations, photographs, plots (such as bar charts and scatter graphs) and tables. They exposed their algorithm to between 400 and 600 images of each of these types of figure until it could distinguish them with an accuracy greater than 90%. Then they set it loose on the more-than-650,000 papers (containing more than 10m figures) stored on PubMed Central, an online archive of biomedical-research articles. To measure each paper's influence, they calculated its article-level Eigenfactor score, a modified version of the PageRank algorithm Google uses to provide the most relevant results for internet searches. Eigenfactor scoring gives a better measure than simply noting the number of times a paper is cited elsewhere, because it weights citations by their influence. A citation in a paper that is itself highly cited is worth more than one in a paper that is not.
    As the team describe in a paper posted (http://arxiv.org/abs/1605.04951) on arXiv, they found that figures did indeed matter, but not all in the same way. An average paper in PubMed Central has about one diagram for every three pages and gets 1.67 citations. Papers with more diagrams per page and, to a lesser extent, plots per page tended to be more influential (on average, a paper accrued two more citations for every extra diagram per page, and one more for every extra plot per page). By contrast, including photographs and equations seemed to decrease the chances of a paper being cited by others. That agrees with a study from 2012, whose authors counted (by hand) the number of mathematical expressions in over 600 biology papers and found that each additional equation per page reduced the number of citations a paper received by 22%. This does not mean that researchers should rush to include more diagrams in their next paper. Dr Howe has not shown what is behind the effect, which may merely be one of correlation, rather than causation. It could, for example, be that papers with lots of diagrams tend to be those that illustrate new concepts, and thus start a whole new field of inquiry. Such papers will certainly be cited a lot. On the other hand, the presence of equations really might reduce citations. Biologists (as are most of those who write and read the papers in PubMed Central) are notoriously maths-averse. If that is the case, looking in a physics archive would probably produce a different result.
    Dr Howe and his colleagues do, however, believe that the study of diagrams can result in new insights. A figure showing new metabolic pathways in a cell, for example, may summarise hundreds of experiments. Since illustrations can convey important scientific concepts in this way, they think that browsing through related figures from different papers may help researchers come up with new theories. As Dr Howe puts it, "the unit of scientific currency is closer to the figure than to the paper." With this thought in mind, the team have created a website (http://viziometrics.org/) where the millions of images sorted by their program can be searched using key words. Their next plan is to extract the information from particular types of scientific figure, to create comprehensive "super" figures: a giant network of all the known chemical processes in a cell, for example, or the best-available tree of life. At just one such superfigure per paper, though, the citation records of articles containing such all-embracing diagrams may very well undermine the correlation that prompted their creation in the first place. Call it the ultimate marriage of chart and science.
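    The article-level Eigenfactor score mentioned above is described as a modified PageRank over the citation graph, weighting each citation by the influence of the citing paper. The toy sketch below illustrates that idea with plain power iteration on a small hypothetical citation graph; it is not the actual Eigenfactor algorithm, and the graph, damping factor and iteration count are placeholders.

```python
# Toy PageRank-style scoring of a hypothetical citation graph: a paper's weight
# depends on the weights of the papers citing it, so a citation from a highly
# cited paper counts for more than one from an obscure paper.
def citation_rank(cites, damping=0.85, iters=50):
    """cites maps each paper to the list of papers it cites (its reference list)."""
    papers = sorted(set(cites) | {p for refs in cites.values() for p in refs})
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in papers}
        for src, refs in cites.items():
            if refs:  # dangling papers simply do not redistribute their weight here
                share = damping * rank[src] / len(refs)
                for dst in refs:
                    new[dst] += share
        rank = new
    return rank

if __name__ == "__main__":
    toy_graph = {"A": ["B", "C"], "B": ["C"], "C": [], "D": ["C"]}  # hypothetical papers
    for paper, score in sorted(citation_rank(toy_graph).items(), key=lambda kv: -kv[1]):
        print(paper, round(score, 3))
```

    Papers cited by already influential papers end up with higher scores, which is exactly the property the article relies on when it prefers Eigenfactor scoring over raw citation counts.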
    Footnote
    Cf.: http://www.economist.com/news/science-and-technology/21700620-surprisingly-simple-test-check-research-papers-errors-come-again.
  6. Guidi, F.; Sacerdoti Coen, C.: ¬A survey on retrieval of mathematical knowledge (2015) 0.04
    0.04236927 = product of:
      0.0635539 = sum of:
        0.028199887 = weight(_text_:of in 5865) [ClassicSimilarity], result of:
          0.028199887 = score(doc=5865,freq=8.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.34554482 = fieldWeight in 5865, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=5865)
        0.03535401 = product of:
          0.07070802 = sum of:
            0.07070802 = weight(_text_:22 in 5865) [ClassicSimilarity], result of:
              0.07070802 = score(doc=5865,freq=2.0), product of:
                0.18275474 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05218836 = queryNorm
                0.38690117 = fieldWeight in 5865, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5865)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    We present a short survey of the literature on indexing and retrieval of mathematical knowledge, with pointers to 72 papers and tentative taxonomies of both retrieval problems and recurring techniques.
    Date
    22. 2.2017 12:51:57
  7. Bensman, S.J.: Eugene Garfield, Francis Narin, and PageRank : the theoretical bases of the Google search engine (2013) 0.04
    0.038751446 = product of:
      0.058127165 = sum of:
        0.029843956 = weight(_text_:of in 1149) [ClassicSimilarity], result of:
          0.029843956 = score(doc=1149,freq=14.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.36569026 = fieldWeight in 1149, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=1149)
        0.028283209 = product of:
          0.056566417 = sum of:
            0.056566417 = weight(_text_:22 in 1149) [ClassicSimilarity], result of:
              0.056566417 = score(doc=1149,freq=2.0), product of:
                0.18275474 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05218836 = queryNorm
                0.30952093 = fieldWeight in 1149, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1149)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This paper presents a test of the validity of using Google Scholar to evaluate the publications of researchers by comparing the premises on which its search engine, PageRank, is based, to those of Garfield's theory of citation indexing. It finds that the premises are identical and that PageRank and Garfield's theory of citation indexing validate each other.
    Date
    17.12.2013 11:02:22
  8. Beebe, N.H.F.: ¬A complete bibliography of the Journal for General Philosophy of Science / Zeitschrift für allgemeine Wissenschaftstheorie (2019) 0.04
    0.037281495 = product of:
      0.05592224 = sum of:
        0.027916465 = weight(_text_:of in 3991) [ClassicSimilarity], result of:
          0.027916465 = score(doc=3991,freq=4.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.34207192 = fieldWeight in 3991, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.109375 = fieldNorm(doc=3991)
        0.028005775 = product of:
          0.05601155 = sum of:
            0.05601155 = weight(_text_:science in 3991) [ClassicSimilarity], result of:
              0.05601155 = score(doc=3991,freq=2.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.40744454 = fieldWeight in 3991, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3991)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
  9. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.04
    0.035951518 = product of:
      0.053927273 = sum of:
        0.023928396 = weight(_text_:of in 1967) [ClassicSimilarity], result of:
          0.023928396 = score(doc=1967,freq=16.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.2932045 = fieldWeight in 1967, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=1967)
        0.029998874 = product of:
          0.05999775 = sum of:
            0.05999775 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
              0.05999775 = score(doc=1967,freq=4.0), product of:
                0.18275474 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05218836 = queryNorm
                0.32829654 = fieldWeight in 1967, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1967)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
  10. Sojka, P.; Liska, M.: ¬The art of mathematics retrieval (2011) 0.03
    0.03472931 = product of:
      0.05209396 = sum of:
        0.017095273 = weight(_text_:of in 3450) [ClassicSimilarity], result of:
          0.017095273 = score(doc=3450,freq=6.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.20947541 = fieldWeight in 3450, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3450)
        0.034998685 = product of:
          0.06999737 = sum of:
            0.06999737 = weight(_text_:22 in 3450) [ClassicSimilarity], result of:
              0.06999737 = score(doc=3450,freq=4.0), product of:
                0.18275474 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05218836 = queryNorm
                0.38301262 = fieldWeight in 3450, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3450)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The design and architecture of MIaS (Math Indexer and Searcher), a system for mathematics retrieval, are presented, and design decisions are discussed. We argue for an approach based on Presentation MathML using a similarity of math subformulae. The system was implemented as a math-aware search engine based on the state-of-the-art system Apache Lucene. Scalability issues were checked against more than 400,000 arXiv documents with 158 million mathematical formulae. Almost three billion MathML subformulae were indexed using a Solr-compatible Lucene.
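    As a rough illustration of the subformula idea in the abstract (and not the actual MIaS implementation, which is a Lucene-based system with additional processing not shown here), the sketch below enumerates every subtree of a Presentation MathML expression and serializes each one as an indexable token; the example formula is arbitrary.

```python
# Sketch: enumerate the subformulae of a Presentation MathML expression and turn
# each subtree into a single whitespace-normalized token that a text search
# engine could index. Only illustrates the idea; it is not the MIaS implementation.
import xml.etree.ElementTree as ET

def subformulae(element):
    """Yield the serialized form of every subtree rooted in a MathML element."""
    yield ET.tostring(element, encoding="unicode")
    for child in element:
        yield from subformulae(child)

if __name__ == "__main__":
    mathml = (
        '<math xmlns="http://www.w3.org/1998/Math/MathML">'
        "<mrow><msup><mi>x</mi><mn>2</mn></msup><mo>+</mo><mi>y</mi></mrow></math>"
    )
    root = ET.fromstring(mathml)
    for token in subformulae(root):
        print(" ".join(token.split()))  # one index token per subformula
```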
    Content
    Cf.: DocEng2011, September 19-22, 2011, Mountain View, California, USA. Copyright 2011 ACM 978-1-4503-0863-2/11/09
    Date
    22. 2.2017 13:00:42
  11. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.03
    0.032847296 = product of:
      0.049270943 = sum of:
        0.028058534 = weight(_text_:of in 4649) [ClassicSimilarity], result of:
          0.028058534 = score(doc=4649,freq=22.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.34381276 = fieldWeight in 4649, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=4649)
        0.021212406 = product of:
          0.042424813 = sum of:
            0.042424813 = weight(_text_:22 in 4649) [ClassicSimilarity], result of:
              0.042424813 = score(doc=4649,freq=2.0), product of:
                0.18275474 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05218836 = queryNorm
                0.23214069 = fieldWeight in 4649, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4649)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    More and more cultural heritage institutions publish their collections, vocabularies and metadata on the Web. The resulting Web of linked cultural data opens up exciting new possibilities for searching and browsing through these cultural heritage collections. We report on ongoing work in which we investigate the estimation of relevance in this Web of Culture. We study existing measures of semantic distance and how they apply to two use cases. The use cases relate to the structured, multilingual and multimodal nature of the Culture Web. We distinguish between measures using the Web, such as Google distance and PMI, and measures using the Linked Data Web, i.e. the semantic structure of metadata vocabularies. We perform a small study in which we compare these semantic distance measures to human judgements of relevance. Although it is too early to draw any definitive conclusions, the study provides new insights into the applicability of semantic distance measures to the Web of Culture, and clear starting points for further research.
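    Two of the Web-based measures named above, Google distance and PMI, can be written down compactly. The sketch below computes Normalized Google Distance and pointwise mutual information from hypothetical hit counts; the counts and the total-collection size are placeholders, not figures from the study.

```python
# Sketch: two corpus-based relatedness measures from hypothetical hit counts.
# f_x and f_y are hit counts for each term, f_xy the joint count, and n_total a
# placeholder estimate of the number of indexed pages or documents.
from math import log

def normalized_google_distance(f_x, f_y, f_xy, n_total):
    """Smaller values mean the two terms co-occur more often than chance suggests."""
    return (max(log(f_x), log(f_y)) - log(f_xy)) / (log(n_total) - min(log(f_x), log(f_y)))

def pmi(f_x, f_y, f_xy, n_total):
    """Pointwise mutual information under simple count-based probability estimates."""
    return log((f_xy / n_total) / ((f_x / n_total) * (f_y / n_total)))

if __name__ == "__main__":
    # hypothetical counts for two related vocabulary terms
    print(round(normalized_google_distance(120_000, 80_000, 15_000, 10_000_000_000), 3))
    print(round(pmi(120_000, 80_000, 15_000, 10_000_000_000), 3))
```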
    Date
    26.12.2011 13:40:22
  12. ¬The Computer Science Ontology (CSO) (2018) 0.03
    0.032614514 = product of:
      0.04892177 = sum of:
        0.02442182 = weight(_text_:of in 4429) [ClassicSimilarity], result of:
          0.02442182 = score(doc=4429,freq=24.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.2992506 = fieldWeight in 4429, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4429)
        0.024499949 = product of:
          0.048999898 = sum of:
            0.048999898 = weight(_text_:science in 4429) [ClassicSimilarity], result of:
              0.048999898 = score(doc=4429,freq=12.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.3564397 = fieldWeight in 4429, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4429)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The Computer Science Ontology (CSO) is a large-scale ontology of research areas that was automatically generated using the Klink-2 algorithm on the Rexplore dataset, which consists of about 16 million publications, mainly in the field of Computer Science. The Klink-2 algorithm combines semantic technologies, machine learning, and knowledge from external sources to automatically generate a fully populated ontology of research areas. Some relationships were also revised manually by experts during the preparation of two ontology-assisted surveys in the fields of Semantic Web and Software Architecture. The main root of CSO is Computer Science; however, the ontology also includes a few secondary roots, such as Linguistics, Geometry, Semantics, and so on. CSO presents two main advantages over manually crafted categorisations used in Computer Science (e.g., the 2012 ACM Classification, the Microsoft Academic Search Classification). First, it can characterise higher-level research areas by means of hundreds of sub-topics and related terms, which makes it possible to map very specific terms to higher-level research areas. Secondly, it can be easily updated by running Klink-2 on a set of new publications. A more comprehensive discussion of the advantages of adopting an automatically generated ontology in the scholarly domain can be found in.
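    The mapping of very specific terms to higher-level research areas mentioned above amounts to walking the broader-topic relation upwards. The sketch below does this over a tiny hand-made hierarchy; the topic names and structure are illustrative only and are not actual CSO content, which is distributed as an ontology rather than a Python dictionary.

```python
# Sketch: map a specific topic to its higher-level research areas by following a
# broader-than relation upwards. The hierarchy fragment is hypothetical, not CSO data.
BROADER = {
    "ontology matching": ["semantic web"],
    "semantic web": ["artificial intelligence", "world wide web"],
    "artificial intelligence": ["computer science"],
    "world wide web": ["computer science"],
}

def higher_level_areas(topic):
    """Return every research area reachable from the topic via BROADER."""
    seen, stack = set(), [topic]
    while stack:
        for broader in BROADER.get(stack.pop(), []):
            if broader not in seen:
                seen.add(broader)
                stack.append(broader)
    return seen

if __name__ == "__main__":
    print(sorted(higher_level_areas("ontology matching")))
    # -> ['artificial intelligence', 'computer science', 'semantic web', 'world wide web']
```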
    Object
    Computer Science Ontology
  13. Buranyi, S.: Is the staggeringly profitable business of scientific publishing bad for science? (2017) 0.03
    0.032286722 = product of:
      0.04843008 = sum of:
        0.024176367 = weight(_text_:of in 3711) [ClassicSimilarity], result of:
          0.024176367 = score(doc=3711,freq=12.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.29624295 = fieldWeight in 3711, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3711)
        0.024253715 = product of:
          0.04850743 = sum of:
            0.04850743 = weight(_text_:science in 3711) [ClassicSimilarity], result of:
              0.04850743 = score(doc=3711,freq=6.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.35285735 = fieldWeight in 3711, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3711)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    It is an industry like no other, with profit margins to rival Google - and it was created by one of Britain's most notorious tycoons: Robert Maxwell. "Even scientists who are fighting for reform are often not aware of the roots of the system: how, in the boom years after the second world war, entrepreneurs built fortunes by taking publishing out of the hands of scientists and expanding the business on a previously unimaginable scale. And no one was more transformative and ingenious than Robert Maxwell, who turned scientific journals into a spectacular money-making machine that bankrolled his rise in British society."
    Source
    https://www.theguardian.com/science/2017/jun/27/profitable-business-scientific-publishing-bad-for-science
  14. Calculating the h-index : Web of Science, Scopus or Google Scholar? (2011) 0.03
    0.032153625 = product of:
      0.048230436 = sum of:
        0.019940332 = weight(_text_:of in 854) [ClassicSimilarity], result of:
          0.019940332 = score(doc=854,freq=4.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.24433708 = fieldWeight in 854, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=854)
        0.028290104 = product of:
          0.05658021 = sum of:
            0.05658021 = weight(_text_:science in 854) [ClassicSimilarity], result of:
              0.05658021 = score(doc=854,freq=4.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.41158113 = fieldWeight in 854, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.078125 = fieldNorm(doc=854)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Object
    Web of Science
  15. Publish and don't be damned : some science journals that claim to peer review papers do not do so (2018) 0.03
    0.031955566 = product of:
      0.047933348 = sum of:
        0.023928396 = weight(_text_:of in 4333) [ClassicSimilarity], result of:
          0.023928396 = score(doc=4333,freq=16.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.2932045 = fieldWeight in 4333, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=4333)
        0.02400495 = product of:
          0.0480099 = sum of:
            0.0480099 = weight(_text_:science in 4333) [ClassicSimilarity], result of:
              0.0480099 = score(doc=4333,freq=8.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.34923816 = fieldWeight in 4333, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4333)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    One estimate puts the number of papers in questionable journals at 400,000.
    Content
    "Whether to get a promotion or merely a foot in the door, academics have long known that they must publish papers, typically the more the better. Tallying scholarly publications to evaluate their authors has been common since the invention of scientific journals in the 17th century. So, too, has the practice of journal editors asking independent, usually anonymous, experts to scrutinise manuscripts and reject those deemed flawed-a quality-control process now known as peer review. Of late, however, this habit of according importance to papers labelled as "peer reviewed" has become something of a gamble. A rising number of journals that claim to review submissions in this way do not bother to do so. Not coincidentally, this seems to be leading some academics to inflate their publication lists with papers that might not pass such scrutiny."
    Footnote
    This article appeared in the Science and technology section of the print edition under the headline "Publish and don't be damned".
    Source
    https://www.economist.com/science-and-technology/2018/06/23/some-science-journals-that-claim-to-peer-review-papers-do-not-do-so
  16. Academic publishing : No peeking (2014) 0.03
    0.031830467 = product of:
      0.047745697 = sum of:
        0.01973992 = weight(_text_:of in 805) [ClassicSimilarity], result of:
          0.01973992 = score(doc=805,freq=2.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.24188137 = fieldWeight in 805, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.109375 = fieldNorm(doc=805)
        0.028005775 = product of:
          0.05601155 = sum of:
            0.05601155 = weight(_text_:science in 805) [ClassicSimilarity], result of:
              0.05601155 = score(doc=805,freq=2.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.40744454 = fieldWeight in 805, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.109375 = fieldNorm(doc=805)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    A publishing giant goes after the authors of its journals' papers
    Series
    Science and technology
  17. Albinus, L.: Can science cope with more than one world? : a cross-reading of Habermas, Popper, and Searle (2013) 0.03
    0.030921802 = product of:
      0.046382703 = sum of:
        0.02637858 = weight(_text_:of in 4520) [ClassicSimilarity], result of:
          0.02637858 = score(doc=4520,freq=28.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.32322758 = fieldWeight in 4520, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4520)
        0.020004123 = product of:
          0.040008247 = sum of:
            0.040008247 = weight(_text_:science in 4520) [ClassicSimilarity], result of:
              0.040008247 = score(doc=4520,freq=8.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.2910318 = fieldWeight in 4520, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4520)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The purpose of this article is to critically assess the 'three-world theory' as it is presented, with some slight but decisive differences, by Jürgen Habermas and Karl Popper. This theory presents the philosophy of science with a conceptual and material problem, insofar as it claims that science has no single access to all aspects of the world. Although I will try to demonstrate advantages of Popper's idea of 'the third world' of ideas, the shortcomings of his ontological stance become visible from the pragmatic point of view in Habermas's theory of communicative acts. With regard to the critique that the three-world theory has met in both its pragmatic and ontological versions, I will take a closer look at John Searle's naturalistic counter-position. By teasing out some problematic implications in his theory of causation, I aim to show that Searle's approach is, in fact, much closer to Popper's than he might think. Finally, while condoning Habermas's distinction between the natural world and the lifeworld, I will opt for a pragmatically differentiated view of 'the real', rather than speaking of different worlds.
    Source
    Journal for general philosophy of science / Zeitschrift für allgemeine Wissenschaftstheorie. 44(2013) no.1, S.3-20
  18. Perovsek, M.; Kranjc, J.; Erjavec, T.; Cestnik, B.; Lavrac, N.: TextFlows : a visual programming platform for text mining and natural language processing (2016) 0.03
    0.029811531 = product of:
      0.044717297 = sum of:
        0.023928396 = weight(_text_:of in 2697) [ClassicSimilarity], result of:
          0.023928396 = score(doc=2697,freq=16.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.2932045 = fieldWeight in 2697, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2697)
        0.020788899 = product of:
          0.041577797 = sum of:
            0.041577797 = weight(_text_:science in 2697) [ClassicSimilarity], result of:
              0.041577797 = score(doc=2697,freq=6.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.30244917 = fieldWeight in 2697, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2697)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Text mining and natural language processing are fast-growing areas of research, with numerous applications in business, science and creative industries. This paper presents TextFlows, a web-based text mining and natural language processing platform supporting workflow construction, sharing and execution. The platform enables visual construction of text mining workflows through a web browser, and the execution of the constructed workflows on a processing cloud. This makes TextFlows an adaptable infrastructure for the construction and sharing of text processing workflows, which can be reused in various applications. The paper presents the implemented text mining and language processing modules, and describes some precomposed workflows. Their features are demonstrated on three use cases: comparison of document classifiers and of different part-of-speech taggers on a text categorization problem, and outlier detection in document corpora.
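    The first use case above, comparing document classifiers on a text categorization problem, can be approximated outside the platform with a few lines of scikit-learn; the sketch below is such a stand-in, where the dataset, the two classifiers and the evaluation setup are illustrative choices and not components of TextFlows itself.

```python
# Sketch: compare two document classifiers on a text categorization problem,
# roughly the kind of experiment a TextFlows workflow would assemble visually.
# scikit-learn and the 20 newsgroups sample are stand-ins, not TextFlows components.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

categories = ["sci.space", "rec.autos"]  # two illustrative categories
data = fetch_20newsgroups(subset="train", categories=categories,
                          remove=("headers", "footers", "quotes"))

for name, clf in [("naive Bayes", MultinomialNB()),
                  ("logistic regression", LogisticRegression(max_iter=1000))]:
    pipeline = make_pipeline(TfidfVectorizer(), clf)
    scores = cross_val_score(pipeline, data.data, data.target, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```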
    Content
    Cf.: http://www.sciencedirect.com/science/article/pii/S0167642316000113. See also: http://textflows.org.
    Source
    Science of computer programming. In Press, 2016
  19. Kashyap, M.M.: Application of integrative approach in the teaching of library science techniques and application of information technology (2011) 0.03
    0.02956405 = product of:
      0.04434607 = sum of:
        0.02645384 = weight(_text_:of in 4395) [ClassicSimilarity], result of:
          0.02645384 = score(doc=4395,freq=44.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.3241498 = fieldWeight in 4395, product of:
              6.6332498 = tf(freq=44.0), with freq of:
                44.0 = termFreq=44.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=4395)
        0.017892234 = product of:
          0.035784468 = sum of:
            0.035784468 = weight(_text_:science in 4395) [ClassicSimilarity], result of:
              0.035784468 = score(doc=4395,freq=10.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.26030678 = fieldWeight in 4395, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4395)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Today many libraries are using computers and allied information technologies to improve their work methods and services. Consequently, libraries need professional staff, or need to train their present staff, who can face the challenges posed by the introduction of these technologies in libraries. To meet the demand for such professional staff, the departments of Library and Information Science in India introduced new courses of study to expose their students to the use and application of computers and other allied technologies. Some of the courses introduced are: Computer Application in Libraries; Systems Analysis and Design Technique; Design and Development of Computer-based Library Information Systems; Database Organisation and Design; Library Networking; Use and Application of Communication Technology, and so forth. It is felt that these computer- and information-technology-oriented courses need to be restructured, revised, and more harmoniously blended with the traditional mainstream courses of the library and information science discipline. We must alter the strategy of teaching library techniques, such as classification, cataloguing, and library procedures, and the techniques of designing computer-based library information systems and services. The use and application of these techniques become interwoven when we shift from a manually operated library environment to a computer-based one. As such, it becomes necessary to follow an integrative approach when we teach these techniques to students of library and information science, or when we train library staff in their use and application to design, develop and implement computer-based library information systems and services. In the following sections of this paper, we outline the correspondence between certain concepts and techniques developed by computer specialists and those developed by librarians in their respective domains. We make use of the techniques of both domains in the design and implementation of computer-based library information systems and services, so it is essential that the lessons expounding these supplementary and complementary techniques be integrated.
    Source
    http://lisuncg.net/icl/blogs-news/madan-mohan-kashyap/2011/01/20/application-integrative-approach-teaching-library-science-
  20. Voß, J.: Classification of knowledge organization systems with Wikidata (2016) 0.03
    0.029063582 = product of:
      0.043595374 = sum of:
        0.022382967 = weight(_text_:of in 3082) [ClassicSimilarity], result of:
          0.022382967 = score(doc=3082,freq=14.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.2742677 = fieldWeight in 3082, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=3082)
        0.021212406 = product of:
          0.042424813 = sum of:
            0.042424813 = weight(_text_:22 in 3082) [ClassicSimilarity], result of:
              0.042424813 = score(doc=3082,freq=2.0), product of:
                0.18275474 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05218836 = queryNorm
                0.23214069 = fieldWeight in 3082, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3082)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This paper presents a crowd-sourced classification of knowledge organization systems based on the open knowledge base Wikidata. The focus is less on the current result, which is still rather preliminary, than on the environment and process of categorization in Wikidata and the extraction of KOS from the collaborative database. Benefits and disadvantages are summarized and discussed for application to the knowledge organization of other subject areas with Wikidata.
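    One way to pull such a classification back out of Wikidata is a query against its public SPARQL endpoint. The sketch below shows the general shape of such a query; the Q-number used for "knowledge organization system" is a placeholder to be replaced with the correct item identifier, and the query is an assumption rather than the paper's actual extraction procedure.

```python
# Sketch: list items that Wikidata classifies as instances of a given class via
# the public SPARQL endpoint. QID_KOS is a placeholder; substitute the correct
# Q-number for "knowledge organization system" before running.
import requests

ENDPOINT = "https://query.wikidata.org/sparql"
QID_KOS = "Q0000000"  # placeholder item identifier

QUERY = f"""
SELECT ?item ?itemLabel WHERE {{
  ?item wdt:P31/wdt:P279* wd:{QID_KOS} .   # instance of the class or of one of its subclasses
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en" . }}
}}
LIMIT 50
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "kos-classification-sketch/0.1"},
)
for row in response.json()["results"]["bindings"]:
    print(row["item"]["value"], "-", row["itemLabel"]["value"])
```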
    Pages
    S.15-22
    Source
    Proceedings of the 15th European Networked Knowledge Organization Systems Workshop (NKOS 2016) co-located with the 20th International Conference on Theory and Practice of Digital Libraries 2016 (TPDL 2016), Hannover, Germany, September 9, 2016. Edi. by Philipp Mayr et al. [http://ceur-ws.org/Vol-1676/=urn:nbn:de:0074-1676-5]

Languages

  • e 284
  • d 78
  • i 6
  • f 2
  • a 1
  • el 1
  • es 1

Types

  • a 248
  • s 14
  • x 10
  • r 8
  • m 4
  • n 2
  • b 1
  • i 1