Search (1749 results, page 1 of 88)

  • Filter: year_i:[2000 TO 2010}
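Note on the scores: the number at the end of each entry is the Lucene relevance score (ClassicSimilarity, a TF-IDF scheme with coordination and length normalisation). As a rough illustration only, the sketch below reproduces the first hit's 0.45 from the statistics the engine reports for that document (termFreq 2, idf 8.478011, queryNorm 0.036885478, fieldNorm 0.0625; four of the seven query clauses match, one of them as a nested one-of-three clause). This is a hand calculation for orientation, not the engine's code.

```python
from math import sqrt

# Statistics reported by the engine for the first hit; all values taken from the result list.
idf = 8.478011           # idf(docFreq=24, maxDocs=44218)
query_norm = 0.036885478
field_norm = 0.0625
tf = sqrt(2.0)           # termFreq = 2.0 in the matching field

# ClassicSimilarity per-term weight = (idf * queryNorm) * (tf * idf * fieldNorm)
query_weight = idf * query_norm            # ~0.3127
field_weight = tf * idf * field_norm       # ~0.7494
term_weight = query_weight * field_weight  # ~0.2343

# Three clauses contribute the full weight; a fourth, nested one-of-three clause contributes a third of it.
summed = 3 * term_weight + term_weight / 3  # ~0.7811

# Only 4 of the 7 top-level query clauses match, so coord(4/7) scales the sum.
score = summed * 4 / 7
print(round(score, 4))   # 0.4464 -> shown as 0.45 for the first hit
```
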
  1. Schrodt, R.: Tiefen und Untiefen im wissenschaftlichen Sprachgebrauch (2008) 0.45
    
    Content
     See also: https://studylibde.com/doc/13053640/richard-schrodt. See also: http://www.univie.ac.at/Germanistik/schrodt/vorlesung/wissenschaftssprache.doc.
  2. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.43
    
    Content
     See: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  3. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.39
    
    Content
     See: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5386707.
  4. Mas, S.; Marleau, Y.: Proposition of a faceted classification model to support corporate information organization and digital records management (2009) 0.33
    
    Footnote
     See: http://ieeexplore.ieee.org/iel5/4755313/4755314/04755480.pdf?arnumber=4755480.
  5. Donsbach, W.: Wahrheit in den Medien : über den Sinn eines methodischen Objektivitätsbegriffes (2001) 0.28
    
    Source
     Politische Meinung. 381(2001) Nr.1, S.65-74 [https://www.dgfe.de/fileadmin/OrdnerRedakteure/Sektionen/Sek02_AEW/KWF/Publikationen_Reihe_1989-2003/Band_17/Bd_17_1994_355-406_A.pdf]
  6. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.22
    
    Content
     See: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  7. Popper, K.R.: ¬Das offene Universum : ein Argument für den Indeterminismus (2001) 0.09
    
    Abstract
    "In Das offene Universum: Ein Argument für den Indeterminismus präsentiert Popper eine Kritik des 'wissenschaftlichen' und des metaphysischen Determinismus und argumentiert, daß die klassische Physik den Determinismus genauso wenig voraussetzt oder impliziert wie die Quantenphysik. Dennoch stellt er fest, daß der metaphysische Determinismus den Werken vieler zeitgenössischer Quantenphysiker immer noch zugrundeliegt, inklusive den Werken von Gegnern des Determinismus. Popper verfolgt die Rollen, die die subjektive Interpretation der Wahrscheinlichkeit in der Physik immer noch spielt, bis hin zu diesen metaphysischen deterministischen Voraussetzungen [.]. Der [.] Band tritt in seiner Abhandlung über den Determinismus für die Ansicht ein, daß unsere Rationalität, was die Voraussage des zukünftigen Wachstums des menschlichen Wissens anbelangt, begrenzt ist. Wenn es keine solche Grenze gäbe, wären ernsthafte Argumente sinnlos: und ihr Auftreten wäre eine Illusion.Popper argumentiert also, daß die menschliche Rationalität, was Kritik anbelangt, unbegrenzt ist, jedoch begrenzt, was ihre Voraussagekraft anbelangt; und er zeigt, daß die Unbegrenztheit und die Begrenztheit jede in ihrem Gebiet notwendig sind, damit es die menschliche Rationalität überhaupt geben kann." (Der Herausgeber im Nachwort).
  8. Aalberg, T.; Haugen, F.B.; Husby, O.: ¬A Tool for Converting from MARC to FRBR (2006) 0.04
    
    Abstract
     The FRBR model is considered by many to be an important contribution to the next generation of bibliographic catalogues, but a major challenge for the library community is how to use this model on already existing MARC-based catalogues. This problem requires a solution for the interpretation and conversion of MARC records, and a tool for this kind of conversion has been developed as part of the Norwegian BIBSYS FRBR project. The tool is based on a systematic approach to the interpretation and conversion process and is designed to be adaptable to the rules applied in different catalogues.
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
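The abstract above does not spell out the BIBSYS conversion rules. Purely as an illustration of the kind of MARC interpretation such a tool has to perform, here is a minimal sketch that maps a flattened MARC-like record onto FRBR-style work and manifestation stubs. The field tags (100 main personal name, 245 title, 260 $c publication date) are standard MARC; the record layout, entity classes and function name are invented for the example and are not the project's actual code.

```python
from dataclasses import dataclass

@dataclass
class Work:
    creator: str
    title: str

@dataclass
class Manifestation:
    work: Work
    year: str

def marc_to_frbr(fields: dict) -> Manifestation:
    """Toy interpretation step: MARC 100 = main personal name, 245 = title, 260 $c = publication date."""
    work = Work(creator=fields.get("100", ""), title=fields.get("245", ""))
    return Manifestation(work=work, year=fields.get("260c", ""))

record = {"100": "Aalberg, T.", "245": "A Tool for Converting from MARC to FRBR", "260c": "2006"}
print(marc_to_frbr(record))
```
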
  9. Levinson, R.: Symmetry and the computation of conceptual structures (2000) 0.03
    
    Abstract
     The discovery and exploitation of symmetry plays a major role in sciences such as crystallography, quantum theory, condensed-matter physics, thermodynamics, chemistry, biology and others. It should not be surprising, then, since Conceptual Structures are proposed as a universal knowledge representation scheme, that symmetry should play a role in their interpretation and their application. In this tutorial-style paper, we illustrate the role of symmetry in Conceptual Structures and how algorithms may be constructed that exploit this symmetry in order to achieve computational efficiency.
    Date
    3. 9.2000 19:22:45
  10. Weizenbaum, J.: Wir gegen die Gier (2008) 0.03
    
    Content
    Als Beispiel der Anwendung von Metaphern in den Naturwissenschaften fällt mir dieses ein: Ein schwarzes Loch ist ein Stern, dessen Anziehungskraft so stark ist, dass keine Information entfliehen kann. Aber buchstäblich ist so ein Stern nicht "schwarz", noch ist er ein "Loch". Und Information, also elektromagnetische Teilchen, "entfliehen" den ordinären Sternen nicht. Mein Kollege Norbert Wiener schrieb einmal: "Information ist Information, nicht Materie oder Energie." Sie ist immer eine private Leistung, nämlich die der Interpretation, deren Ergebnis Wissen ist. Information hat, wie, zum Beispiel die Aufführung eines Tanzes, keine Permanenz; sie ist eben weder Materie noch Energie. Das Maß der Wahrheit des produzierten Wissens hängt von der Qualität der angewandten Interpretation ab. Wissen überlebt, nämlich indem es den denkenden Menschen buchstäblich informiert, also den Zustand seines Gehirns ändert. Claude Shannons Informationstheorie lehrt uns, dass die Bedeutung einer Nachricht von der Erwartung des Empfängers abhängt. Sie ist nicht messbar, denn Nachrichten sind pure Signale, die keine inhärente Bedeutung bergen. Enthält das New Yorker Telefonbuch Information? Nein! Es besteht aus Daten, nämlich aus Texten, die, um zu Information und Wissen zu werden, interpretiert werden müssen. Der Leser erwartet, dass gewisse Inhalte Namen, Adressen und Telefonnummern repräsentieren. Enthält dieses Telefonbuch die Information, dass viele Armenier nahe beieinander wohnen?
    Date
    16. 3.2008 12:22:08
  11. Morris, J.: Individual differences in the interpretation of text : implications for information science (2009) 0.03
    
    Abstract
    Many tasks in library and information science (e.g., indexing, abstracting, classification, and text analysis techniques such as discourse and content analysis) require text meaning interpretation, and, therefore, any individual differences in interpretation are relevant and should be considered, especially for applications in which these tasks are done automatically. This article investigates individual differences in the interpretation of one aspect of text meaning that is commonly used in such automatic applications: lexical cohesion and lexical semantic relations. Experiments with 26 participants indicate an approximately 40% difference in interpretation. In total, 79, 83, and 89 lexical chains (groups of semantically related words) were analyzed in 3 texts, respectively. A major implication of this result is the possibility of modeling individual differences for individual users. Further research is suggested for different types of texts and readers than those used here, as well as similar research for different aspects of text meaning.
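The lexical chains the abstract refers to are simply groups of words linked by lexical semantic relations. As a toy illustration only, the sketch below chains words that share a relation with any word already in a chain; the relatedness table is a hand-made stand-in, not the relations used in the study.

```python
# Toy lexical chainer; the relatedness pairs are invented for the example.
RELATED = {("car", "truck"), ("car", "driver"), ("apple", "fruit"), ("banana", "fruit")}

def related(a: str, b: str) -> bool:
    return (a, b) in RELATED or (b, a) in RELATED

def lexical_chains(words):
    """Greedily group words into chains of semantically related terms."""
    chains = []
    for word in words:
        for chain in chains:
            if any(related(word, member) for member in chain):
                chain.append(word)
                break
        else:
            chains.append([word])
    return chains

print(lexical_chains(["car", "apple", "truck", "fruit", "driver", "banana"]))
# [['car', 'truck', 'driver'], ['apple', 'fruit', 'banana']]
```
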
  12. Qin, J.; Paling, S.: Converting a controlled vocabulary into an ontology : the case of GEM (2001) 0.02
    
    Date
    24. 8.2005 19:20:22
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  13. Frâncu, V.: ¬An interpretation of the FRBR model (2004) 0.02
    
    Abstract
     Despite the existence of a logical structural model for bibliographic records which integrates any record type, library catalogues persist in offering catalogue records at the level of 'items'. Such records, however, do not clearly indicate which works they contain; hence the search possibilities of the end user are unduly limited. The Functional Requirements for Bibliographic Records (FRBR) present, through a conceptual model independent of any cataloguing code or implementation, a globalized view of the bibliographic universe. This model, a synthesis of the existing cataloguing rules, consists of clearly structured entities and well-defined types of relationships among them. From a theoretical viewpoint, the model is likely to be a good knowledge organiser, with great potential for identifying the author and the work represented by an item or publication, and is able to link different works of an author with different editions, translations or adaptations of those works, aiming to better answer user needs. This paper presents an interpretation of the FRBR model, contrasting it with a traditional bibliographic record for complex library material.
    Date
    17. 6.2015 14:40:22
  14. Johnson, E.H.: Objects for distributed heterogeneous information retrieval (2000) 0.02
    
    Abstract
     The success of the World Wide Web shows that we can access, search, and retrieve information from globally distributed databases. If a database, such as a library catalog, has some sort of Web-based front end, we can type its URL into a Web browser and use its HTML-based forms to search for items in that database. Depending on how well the query conforms to the database content, how the search engine interprets the query, and how the server formats the results into HTML, we might actually find something usable. While the first two issues depend on ourselves and the server, on the Web the latter falls to the mercy of HTML, which we all know as a great destroyer of information because it codes for display but not for content description. When looking at an HTML-formatted display, we must depend on our own interpretation to recognize such entities as author names, titles, and subject identifiers. The Web browser can do nothing but display the information. If we want some other view of the result, such as sorting the records by date (provided it offers such an option to begin with), the server must do it. This makes poor use of the computing power we have at the desktop (or even laptop), which, unless it involves retrieving more records, could easily do the result set manipulation that we currently send back to the server. Despite having personal computers with immense computational power, as far as information retrieval goes, we still essentially use them as dumb terminals.
    Date
    22. 9.1997 19:16:05
  15. Bartlett, J.C.; Toms, E.G.: Developing a protocol for bioinformatics analysis : an integrated information behavior and task analysis approach (2005) 0.02
    
    Abstract
    The purpose of this research is to capture, understand, and model the process used by bioinformatics analysts when facing a specific scientific problem. Integrating information behavior with task analysis, we interviewed 20 bioinformatics experts about the process they follow to conduct a typical bioinformatics analysis - a functional analysis of a gene, and then used a task analysis approach to model that process. We found that each expert followed a unique process in using bioinformatics resources, but had significant similarities with their peers. We synthesized these unique processes into a standard research protocol, from which we developed a procedural model that describes the process of conducting a functional analysis of a gene. The model protocol consists of a series of 16 individual steps, each of which specifies detail for the type of analysis, how and why it is conducted, the tools used, the data input and output, and the interpretation of the results. The linking of information behavior and task analysis research is a novel approach, as it provides a rich high-level view of information behavior while providing a detailed analysis at the task level. In this article we concentrate on the latter.
    Date
    22. 7.2006 14:28:55
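The abstract above says that each of the 16 protocol steps records the type of analysis, how and why it is done, the tools used, the input and output data, and the interpretation of the results. A minimal sketch of such a step as a record type follows; the field names and the example values are assumptions, not taken from the published protocol.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProtocolStep:
    """One step of a bioinformatics analysis protocol, as characterised in the abstract."""
    analysis_type: str
    rationale: str                 # why the step is conducted
    tools: List[str] = field(default_factory=list)
    data_in: str = ""
    data_out: str = ""
    interpretation: str = ""

step = ProtocolStep(
    analysis_type="sequence similarity search",
    rationale="identify homologues of the gene of interest",
    tools=["BLAST"],
    data_in="nucleotide or protein sequence",
    data_out="ranked list of similar sequences",
    interpretation="strong hits suggest shared function",
)
print(step.analysis_type, step.tools)
```
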
  16. Franken, G.: Weglassen öffnet den Weg zur Welt : BuchMalerei und Wortarchitektur von Elisabeth Jansen in Küchenhof-Remise (2004) 0.02
    
    Content
    "Die Welt des Informationszeitalters dringt auf den Menschen immer heftiger und geballter ein - und zugleich immer verschlüsselter, codierter, gefiltert und vermittelt als Medienbotschaft. Die Auseinandersetzung mit diesen Zeichen, ob in visualisierender Form oder textlich gefasst, hat die Hebborner Künstlerin Elisabeth Jansen seit Jahren in den Mittelpunkt ihrer meist kleinformatigen Arbeiten gestellt: Dem Informationsüberfluss, der sich in seiner lautstarken Überlagerung zum Informationsrauschen entwickelt, begegnet sie mit einem energischen und impulgesteuerten Auswahl- und Reduzierungsschritt, wobei Collage und farbliche Abdeckung die bevorzugten Instrumente der Datenzähmung darstellen. In der Remise des Altenberger Küchenhofes sind Ergebnisse ihrer enorm fruchtbaren Produktion aus den letzten Jahren unter dem Titel "BuchMalerei und Wortarchitektur" bis zum 13. Juni zu besichtigen. Ausgangspunkt ihrer Werke ist oft ein irgendwie gestaltetes Papier: ein Kalenderblatt, ein Prospekt, eine Kunstpost karte, die unter dem Lackstift ihre Physiognomie verliert. Die Konturen der Auslassungen, die dem Untergrund erlauben, noch partiell in Erscheinung zu treten, zwingen dem Träger eine völlig neue Interpretation auf. Es bilden sich Umrisse und Gestalten von fragmentarisierter, torsohafter Form, die nur in Ausnahmefällen vorgefundene Elemente als formalisierte Struktur aufgreifen und einbinden, durch zeichnerische Figuren ergänzt. Welt erhält neue Bedeutung, wird auf dem Papier - eigentlich im Kopf - neu konstruiert. Besondere Dichte erhält diese Neuformulierung in den Künstlerbüchern; in denen Elisabeth Jansen über einen längeren Zeitraum tagebuchoder albumartig einen solchen Transformationsprozess fortführt. Diese annalistisch fortlaufenden Kommentare finden ihre Zuspitzung in den Wortarchitekturen", in denen einzelne Worte oder Zeilenbruchstücke aus der Tageslektüre ausgewählt und zu neuen Textkörpern addiert werden: Welt wird zu einem Kaleidoskop von Eindrücken, die sich zufälligen Standpunkten verdanken. Was bleibt, entscheidet das Subjekt selbst."
    Date
    3. 5.1997 8:44:22
  17. Procházka, D.: ¬The development of uniform titles for choreographic works (2006) 0.02
    
    Abstract
    In 1994, the Library of Congress issued a rule interpretation to AACR2 detailing how uniform titles for choreographic works should be established. The value of the rule interpretation is discussed, and it is contrasted with prior practices. The origins of the concept behind the rule are traced back to the New York Public Library in the mid twentieth century, and its evolution into the current guidelines is delineated.
  18. Tudhope, D.; Hodge, G.: Terminology registries (2007) 0.02
    
    Date
    26.12.2011 13:22:07
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  19. Rindflesch, T.C.; Fiszman, M.: The interaction of domain knowledge and linguistic structure in natural language processing : interpreting hypernymic propositions in biomedical text (2003) 0.02
    
    Abstract
    Interpretation of semantic propositions in free-text documents such as MEDLINE citations would provide valuable support for biomedical applications, and several approaches to semantic interpretation are being pursued in the biomedical informatics community. In this paper, we describe a methodology for interpreting linguistic structures that encode hypernymic propositions, in which a more specific concept is in a taxonomic relationship with a more general concept. In order to effectively process these constructions, we exploit underspecified syntactic analysis and structured domain knowledge from the Unified Medical Language System (UMLS). After introducing the syntactic processing on which our system depends, we focus on the UMLS knowledge that supports interpretation of hypernymic propositions. We first use semantic groups from the Semantic Network to ensure that the two concepts involved are compatible; hierarchical information in the Metathesaurus then determines which concept is more general and which more specific. A preliminary evaluation of a sample based on the semantic group Chemicals and Drugs provides 83% precision. An error analysis was conducted and potential solutions to the problems encountered are presented. The research discussed here serves as a paradigm for investigating the interaction between domain knowledge and linguistic structure in natural language processing, and could also make a contribution to research on automatic processing of discourse structure. Additional implications of the system we present include its integration in advanced semantic interpretation processors for biomedical text and its use for information extraction in specific domains. The approach has the potential to support a range of applications, including information retrieval and ontology engineering.
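As a hedged sketch of the two checks the abstract describes - semantic-group compatibility and hypernym direction - the following uses tiny hand-made stand-ins for the UMLS Semantic Network groups and the Metathesaurus hierarchy. The real system works on full UMLS knowledge and underspecified syntactic analysis, which is not reproduced here.

```python
# Hand-made stand-ins for UMLS knowledge; the entries are illustrative only.
SEMANTIC_GROUP = {"ibuprofen": "Chemicals & Drugs", "NSAID": "Chemicals & Drugs", "fever": "Disorders"}
BROADER = {"ibuprofen": "NSAID", "NSAID": "anti-inflammatory agent"}  # child -> parent

def ancestors(concept):
    while concept in BROADER:
        concept = BROADER[concept]
        yield concept

def hypernymic(c1, c2):
    """Return (specific, general) if the two concepts form a hypernymic proposition, else None."""
    if SEMANTIC_GROUP.get(c1) != SEMANTIC_GROUP.get(c2):
        return None                      # incompatible semantic groups
    if c2 in ancestors(c1):
        return (c1, c2)                  # c1 is-a c2
    if c1 in ancestors(c2):
        return (c2, c1)
    return None

print(hypernymic("ibuprofen", "NSAID"))   # ('ibuprofen', 'NSAID')
print(hypernymic("ibuprofen", "fever"))   # None: different semantic groups
```
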
  20. Ohly, H.P.: Erstellung und Interpretation von semantischen Karten am Beispiel des Themas 'Soziologische Beratung' (2004) 0.02
    
    Abstract
     When information flows and information systems are analysed with statistical methods, the results are often presented as graphics, because these can be grasped more quickly and intuitively and give even lay people without deeper statistical training some idea of the data. A classic example is the graphical representation of the losses of Napoleon's army in Russia (Fig. 1). What is often overlooked is that, despite the simplicity of the presentation, large amounts of data are usually drawn upon and then projected into a graphic according to only a few aspects, which can easily lead to misjudgements. Suitable methods must therefore be chosen that allow an adequate and, as far as possible, 'objective' interpretation.
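The projection the abstract warns about - reducing a large data set to the two dimensions of a map - can be made concrete with a few lines of linear algebra. A minimal sketch on synthetic data, using plain PCA via SVD (not the mapping technique used in the paper), also prints how much of the variance the 2-D map actually shows:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic term profiles: 6 "terms" described by 10 features each.
X = rng.normal(size=(6, 10))

# Centre and project onto the first two principal components (PCA via SVD).
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
coords_2d = Xc @ Vt[:2].T          # the 2-D "map" positions

# Share of variance retained by the map; the rest is invisible to the reader.
explained = (S[:2] ** 2).sum() / (S ** 2).sum()
print(coords_2d.shape, round(float(explained), 2))
```
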

Types

  • a 1402
  • m 225
  • el 91
  • s 75
  • x 44
  • b 27
  • i 9
  • r 7
  • n 5
