Search (44 results, page 1 of 3)

  • theme_ss:"Retrievalstudien"
  1. Park, T.K.: The nature of relevance in information retrieval : an empirical study (1993) 0.03
    0.025564162 = product of:
      0.17894913 = sum of:
        0.17894913 = weight(_text_:interpretations in 5336) [ClassicSimilarity], result of:
          0.17894913 = score(doc=5336,freq=4.0), product of:
            0.26682967 = queryWeight, product of:
              7.1535926 = idf(docFreq=93, maxDocs=44218)
              0.03730009 = queryNorm
            0.6706493 = fieldWeight in 5336, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.1535926 = idf(docFreq=93, maxDocs=44218)
              0.046875 = fieldNorm(doc=5336)
      0.14285715 = coord(1/7)
    
    Abstract
Experimental research in information retrieval (IR) depends on the idea of relevance. Because of its key role in IR, recent questions about relevance have raised issues of methodological concern and have shaken the philosophical foundations of IR theory development. Despite an existing set of theoretical definitions of this concept, our understanding of relevance from users' perspectives is still limited. Using naturalistic inquiry methodology, this article reports an empirical study of user-based relevance interpretations. A model is presented that reflects the nature of the thought process of users who are evaluating bibliographic citations produced by a document retrieval system. Three major categories of variables affecting relevance assessments - internal context, external context, and problem context - are identified and described. Users' relevance assessments involve multiple layers of interpretations that are derived from individuals' experiences, perceptions, and private knowledge related to the particular information problems at hand.
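The explain tree shown for result 1 follows Lucene's ClassicSimilarity (TF-IDF): score = coord · queryWeight · fieldWeight, with queryWeight = idf · queryNorm and fieldWeight = tf · idf · fieldNorm. As a minimal sketch, the reported numbers can be reproduced directly from the values in the tree:

```python
import math

def classic_similarity(freq, idf, query_norm, field_norm, coord):
    """Lucene ClassicSimilarity score for a single matching term."""
    tf = math.sqrt(freq)                  # tf(freq=4.0) = 2.0
    query_weight = idf * query_norm       # 7.1535926 * 0.03730009 ~ 0.26682967
    field_weight = tf * idf * field_norm  # 2.0 * 7.1535926 * 0.046875 ~ 0.6706493
    return coord * query_weight * field_weight

# Values copied from the explain tree of result 1 (doc 5336);
# coord(1/7) means 1 of 7 query clauses matched.
score = classic_similarity(freq=4.0, idf=7.1535926,
                           query_norm=0.03730009, field_norm=0.046875,
                           coord=1 / 7)
print(score)  # close to the reported 0.025564162 (float32 vs. float64 rounding)
```

The same formula, with an extra coord(1/2) factor, reproduces the nested trees of the later results.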
  2. Biebricher, P.; Fuhr, N.; Niewelt, B.: Der AIR-Retrievaltest (1986) 0.01
    
    Source
    Automatische Indexierung zwischen Forschung und Anwendung, Hrsg.: G. Lustig
  3. Frisch, E.; Kluck, M.: Pretest zum Projekt German Indexing and Retrieval Testdatabase (GIRT) unter Anwendung der Retrievalsysteme Messenger und freeWAISsf (1997) 0.01
    
  4. Fuhr, N.; Niewelt, B.: Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.01
    
    Date
    20.10.2000 12:22:23
  5. Tomaiuolo, N.G.; Parker, J.: Maximizing relevant retrieval : keyword and natural language searching (1998) 0.01
    
    Source
    Online. 22(1998) no.6, S.57-58
  6. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.01
    
    Date
    11. 8.2001 16:22:19
  7. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.01
    
    Date
    22. 7.2006 18:43:54
  8. Scherer, B.: Automatische Indexierung und ihre Anwendung im DFG-Projekt "Gemeinsames Portal für Bibliotheken, Archive und Museen (BAM)" (2003) 0.00
    
    Abstract
Automatic indexing has attracted growing interest for several years owing to the rising flood of information. However, reservations persist in comparison with intellectual indexing, concerning quality and the greater effort of system implementation and maintenance. More recent developments from the field of knowledge management, such as methods from artificial intelligence, information extraction, text mining and automatic classification, are intended to upgrade and improve automatic indexing, so as to provide more intelligent, more content-based subject access. Besides presenting the fundamentals and methods of automatic indexing as well as recent developments, this Master's thesis also discusses options for evaluation. The focus of the work is the possible application of automatic indexing in the DFG project "Gemeinsames Portal für Bibliotheken, Archive und Museen (BAM)". In the portal, the library-style subject indexing of texts is paramount. In an extensive test, three German linguistic systems are combined with statistical methods (which, however, are in part already integrated into the systems) and evaluated, albeit only on the basis of the index terms produced. In conclusion, the results, and thus the quality (with respect to the index terms), of intellectual and automatic indexing still differ significantly. The reasons lie in semantic problems that remain to be solved, and in the matching against words from a thesaurus, which an automatic indexing system cannot always reproduce. Enriching documents with the generated index terms, to the benefit of retrieval, can be achieved depending on the system or via the integration of a thesaurus.
  9. Chen, H.; Martinez, J.; Kirchhoff, A.; Ng, T.D.; Schatz, B.R.: Alleviating search uncertainty through concept associations : automatic indexing, co-occurrence analysis, and parallel computing (1998) 0.00
    
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  10. Allan, J.; Callan, J.P.; Croft, W.B.; Ballesteros, L.; Broglio, J.; Xu, J.; Shu, H.: INQUERY at TREC-5 (1997) 0.00
    
    Date
    27. 2.1999 20:55:22
  11. Ng, K.B.; Loewenstern, D.; Basu, C.; Hirsh, H.; Kantor, P.B.: Data fusion of machine-learning methods for the TREC5 routing task (and other work) (1997) 0.00
    
    Date
    27. 2.1999 20:59:22
  12. Saracevic, T.: On a method for studying the structure and nature of requests in information retrieval (1983) 0.00
    
    Pages
    S.22-25
  13. Sünkler, S.: Prototypische Entwicklung einer Software für die Erfassung und Analyse explorativer Suchen in Verbindung mit Tests zur Retrievaleffektivität (2012) 0.00
    
    Abstract
The subject of this thesis is the development of a functional prototype of a web application that links the evaluation of exploratory searches with the execution of classical retrieval tests. As a basis for programming the prototype, user-oriented and system-oriented evaluation methods for search engines are analysed and combined in a theoretical model for studying information systems and search engines. In designing the model and the prototype, it is shown how recorded interaction data can be used in practice for search-engine evaluation: on the one hand to obtain a data basis for retrieval tests, and on the other to take into account, when analysing relevance judgements, the implicit feedback conveyed by users' actions. Retrieval tests are the established and proven means of measuring the retrieval effectiveness of information systems and search engines, but they disregard actual user behaviour. One method for capturing the interactions of search-engine users is protocol-based testing, which generates log files about the users of an application. The software implemented as part of this work offers an approach to conducting retrieval tests on the basis of logged user data in combination with controlled search tasks. The result of this work is a finished functional prototype that is already usable, in its current scope, within search-engine studies.
  14. Rijsbergen, C.J. van: A test for the separation of relevant and non-relevant documents in experimental retrieval collections (1973) 0.00
    
    Date
    19. 3.1996 11:22:12
  15. Sanderson, M.: ¬The Reuters test collection (1996) 0.00
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  16. Lespinasse, K.: TREC: une conference pour l'evaluation des systemes de recherche d'information (1997) 0.00
    
    Date
    1. 8.1996 22:01:00
  17. The Fifth Text Retrieval Conference (TREC-5) (1997) 0.00
    
    Abstract
Proceedings of the 5th TREC conference held in Gaithersburg, Maryland, Nov 20-22, 1996. The aim of the conference was discussion of retrieval techniques for large test collections. Different research groups used different techniques, such as automated thesauri, term weighting, natural language techniques, relevance feedback and advanced pattern matching, for information retrieval from the same large database. This procedure makes it possible to compare the results. The proceedings include papers, tables of the system results, and brief system descriptions including timing and storage information
  18. Pemberton, J.K.; Ojala, M.; Garman, N.: Head to head : searching the Web versus traditional services (1998) 0.00
    
    Source
    Online. 22(1998) no.3, S.24-26,28
  19. Dresel, R.; Hörnig, D.; Kaluza, H.; Peter, A.; Roßmann, A.; Sieber, W.: Evaluation deutscher Web-Suchwerkzeuge : Ein vergleichender Retrievaltest (2001) 0.00
    
    Abstract
The German search engines Abacho, Acoon, Fireball and Lycos, as well as the web directories Web.de and Yahoo!, are subjected to a quality test measuring relative recall, precision and availability. The methods of the retrieval tests are presented. On average, at a cut-off value of 25, a recall of around 22%, a precision of just under 19% and an availability of 24% are achieved
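The measures used in the abstract above follow standard definitions; a minimal sketch of precision at a cut-off and relative recall (the document IDs below are invented placeholders, not data from the study):

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved documents that are relevant."""
    return sum(1 for d in retrieved[:k] if d in relevant) / k

def relative_recall(retrieved, relevant, k):
    """Fraction of the known relevant documents found in the top k.
    'Relative' because the relevant set is pooled from the systems
    under test, not the (unknowable) true relevant set of the web."""
    top_k = set(retrieved[:k])
    return sum(1 for d in relevant if d in top_k) / len(relevant)

# Hypothetical toy result list and pooled relevance judgements:
retrieved = ["d1", "d2", "d3", "d4", "d5"]
relevant = {"d2", "d5", "d7", "d9"}
print(precision_at_k(retrieved, relevant, k=5))   # 0.4
print(relative_recall(retrieved, relevant, k=5))  # 0.5
```

In the study the cut-off is 25, so both measures would be evaluated over the top 25 results of each search tool.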
  20. Ellis, D.: Progress and problems in information retrieval (1996) 0.00
    
    Date
    26. 7.2002 20:22:46

Languages

  • e 33
  • d 8
  • f 1
  • m 1

Types

  • a 35
  • s 4
  • m 3
  • r 2
  • x 2
  • el 1