Search (667 results, page 1 of 34)

  • type_ss:"el"
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.08
    0.08489122 = product of:
      0.3395649 = sum of:
        0.08489122 = product of:
          0.25467366 = sum of:
            0.25467366 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.25467366 = score(doc=1826,freq=2.0), product of:
                0.27188486 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.032069415 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
        0.25467366 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.25467366 = score(doc=1826,freq=2.0), product of:
            0.27188486 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.032069415 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.25 = coord(2/8)
    
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
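The indented tree under each hit is Lucene's ClassicSimilarity "explain" output. As a rough cross-check, the short Python sketch below reproduces (up to rounding) the numbers shown for hit 1, assuming the classic TF-IDF building blocks tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), fieldWeight = tf * idf * fieldNorm and clause score = queryWeight * fieldWeight; queryNorm depends on the full query, so it is taken from the output rather than recomputed. The trees for the remaining hits follow the same pattern.

```python
import math

# Values read off the explain tree for hit 1 (doc 1826, terms "3a" and "2f");
# queryNorm is query-dependent, so it is copied from the output as given.
doc_freq, max_docs = 24, 44218
freq = 2.0
field_norm = 0.078125
query_norm = 0.032069415

tf = math.sqrt(freq)                              # 1.4142135
idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # ~8.478011
query_weight = idf * query_norm                   # ~0.27188486
field_weight = tf * idf * field_norm              # ~0.93669677
term_score = query_weight * field_weight          # ~0.25467366

# Coordination factors shown in the tree: the "3a" clause matched 1 of 3
# sub-clauses, and 2 of the 8 top-level query clauses matched overall.
clause_3a = term_score * (1 / 3)                  # ~0.08489122
clause_2f = term_score
total = (clause_3a + clause_2f) * (2 / 8)         # ~0.08489122, displayed as 0.08

print(round(total, 8))
```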
  2. Strobel, S.: ¬The complete Linux kit : fully configured LINUX system kernel (1997) 0.07
    0.06980894 = product of:
      0.18615717 = sum of:
        0.021088472 = product of:
          0.042176943 = sum of:
            0.042176943 = weight(_text_:system in 8959) [ClassicSimilarity], result of:
              0.042176943 = score(doc=8959,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.41757566 = fieldWeight in 8959, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.09375 = fieldNorm(doc=8959)
          0.5 = coord(1/2)
        0.13899893 = product of:
          0.27799785 = sum of:
            0.27799785 = weight(_text_:handbooks in 8959) [ClassicSimilarity], result of:
              0.27799785 = score(doc=8959,freq=2.0), product of:
                0.2593123 = queryWeight, product of:
                  8.085969 = idf(docFreq=36, maxDocs=44218)
                  0.032069415 = queryNorm
                1.0720581 = fieldWeight in 8959, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.085969 = idf(docFreq=36, maxDocs=44218)
                  0.09375 = fieldNorm(doc=8959)
          0.5 = coord(1/2)
        0.026069777 = product of:
          0.052139554 = sum of:
            0.052139554 = weight(_text_:22 in 8959) [ClassicSimilarity], result of:
              0.052139554 = score(doc=8959,freq=2.0), product of:
                0.112301625 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032069415 = queryNorm
                0.46428138 = fieldWeight in 8959, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=8959)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Date
    16. 7.2002 20:22:55
    Pages
    3 CD-ROMs + 2 handbooks
  3. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.07
    0.06791298 = product of:
      0.27165192 = sum of:
        0.06791298 = product of:
          0.20373893 = sum of:
            0.20373893 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.20373893 = score(doc=230,freq=2.0), product of:
                0.27188486 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.032069415 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
        0.20373893 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.20373893 = score(doc=230,freq=2.0), product of:
            0.27188486 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.032069415 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
      0.25 = coord(2/8)
    
    Source
    https%3A%2F%2Ftannerlectures.utah.edu%2F_documents%2Fa-to-z%2Fp%2Fpopper80.pdf&usg=AOvVaw3f4QRTEH-OEBmoYr2J_c7H
  4. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.04
    0.04244561 = product of:
      0.16978244 = sum of:
        0.04244561 = product of:
          0.12733683 = sum of:
            0.12733683 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.12733683 = score(doc=4388,freq=2.0), product of:
                0.27188486 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.032069415 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
        0.12733683 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.12733683 = score(doc=4388,freq=2.0), product of:
            0.27188486 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.032069415 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
      0.25 = coord(2/8)
    
    Footnote
    See: https://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=2ahUKEwizweHljdbcAhVS16QKHXcFD9QQFjABegQICRAB&url=https%3A%2F%2Fwww.researchgate.net%2Fpublication%2F271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls&usg=AOvVaw06orrdJmFF2xbCCp_hL26q.
  5. Knorz, G.; Rein, B.: Semantische Suche in einer Hochschulontologie : Ontologie-basiertes Information-Filtering und -Retrieval mit relationalen Datenbanken (2005) 0.03
    0.03381897 = product of:
      0.13527589 = sum of:
        0.032094855 = weight(_text_:retrieval in 4324) [ClassicSimilarity], result of:
          0.032094855 = score(doc=4324,freq=4.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.33085006 = fieldWeight in 4324, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4324)
        0.10318103 = sum of:
          0.07276629 = weight(_text_:etc in 4324) [ClassicSimilarity], result of:
            0.07276629 = score(doc=4324,freq=2.0), product of:
              0.17370372 = queryWeight, product of:
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.032069415 = queryNorm
              0.41891038 = fieldWeight in 4324, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4324)
          0.03041474 = weight(_text_:22 in 4324) [ClassicSimilarity], result of:
            0.03041474 = score(doc=4324,freq=2.0), product of:
              0.112301625 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.032069415 = queryNorm
              0.2708308 = fieldWeight in 4324, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4324)
      0.25 = coord(2/8)
    
    Abstract
    Ontologies are used to provide, through semantic grounding, a fundamentally better basis for document retrieval in particular than the current state of the art offers. The paper presents an ontology developed and deployed at the FH Darmstadt that is intended to cover the subject area of higher education broadly while at the same time describing it semantically in a differentiated way. The problem of semantic search is that it must be as easy for information seekers to use as common search engines, while at the same time delivering high-quality results on the basis of the elaborate information model. The paper describes which capabilities the software K-Infinity provides and the concept by which these capabilities are employed for a semantic search for documents and other information units (persons, events, projects, etc.).
    Date
    11. 2.2011 18:22:25
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  6. Open Knowledge Foundation: Prinzipien zu offenen bibliographischen Daten (2011) 0.03
    0.032927368 = product of:
      0.13170947 = sum of:
        0.010961009 = product of:
          0.021922018 = sum of:
            0.021922018 = weight(_text_:29 in 4399) [ClassicSimilarity], result of:
              0.021922018 = score(doc=4399,freq=2.0), product of:
                0.11281017 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.032069415 = queryNorm
                0.19432661 = fieldWeight in 4399, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4399)
          0.5 = coord(1/2)
        0.12074847 = sum of:
          0.09002494 = weight(_text_:etc in 4399) [ClassicSimilarity], result of:
            0.09002494 = score(doc=4399,freq=6.0), product of:
              0.17370372 = queryWeight, product of:
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.032069415 = queryNorm
              0.5182672 = fieldWeight in 4399, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4399)
          0.030723527 = weight(_text_:22 in 4399) [ClassicSimilarity], result of:
            0.030723527 = score(doc=4399,freq=4.0), product of:
              0.112301625 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.032069415 = queryNorm
              0.27358043 = fieldWeight in 4399, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4399)
      0.25 = coord(2/8)
    
    Content
    "Bibliographische Daten Um den Geltungsbereich der Prinzipien festzulegen, wird in diesem ersten Teil der zugrundeliegende Begriff bibliographischer Daten erläutert. Kerndaten Bibliographische Daten bestehen aus bibliographischen Beschreibungen. Eine bibliographische Beschreibung beschreibt eine bibliographische Ressource (Artikel, Monographie etc. - ob gedruckt oder elektronisch) zum Zwecke 1. der Identifikation der beschriebenen Ressource, d.h. des Zeigens auf eine bestimmte Ressource in der Gesamtheit aller bibliographischer Ressourcen und 2. der Lokalisierung der beschriebenen Ressource, d.h. eines Hinweises, wo die beschriebene Ressource aufzufinden ist. Traditionellerweise erfüllte eine Beschreibung beide Zwecke gleichzeitig, indem sie Information lieferte über: Autor(en) und Herausgeber, Titel, Verlag, Veröffentlichungsdatum und -ort, Identifizierung des übergeordneten Werks (z.B. einer Zeitschrift), Seitenangaben. Im Web findet Identifikation statt mittels Uniform Resource Identifiers (URIs) wie z.B. URNs oder DOIs. Lokalisierung wird ermöglicht durch HTTP-URIs, die auch als Uniform Resource Locators (URLs) bezeichnet werden. Alle URIs für bibliographische Ressourcen fallen folglich unter den engen Begriff bibliographischer Daten. Sekundäre Daten Eine bibliographische Beschreibung kann andere Informationen enthalten, die unter den Begriff bibliographischer Daten fallen, beispielsweise Nicht-Web-Identifikatoren (ISBN, LCCN, OCLC etc.), Angaben zum Urheberrechtsstatus, administrative Daten und mehr; diese Daten können von Bibliotheken, Verlagen, Wissenschaftlern, Online-Communities für Buchliebhaber, sozialen Literaturverwaltungssystemen und Anderen produziert sein. Darüber hinaus produzieren Bibliotheken und verwandte Institutionen kontrollierte Vokabulare zum Zwecke der bibliographischen Beschreibung wie z. B. Personen- und Schlagwortnormdateien, Klassifikationen etc., die ebenfalls unter den Begriff bibliographischer Daten fallen."
    Date
    22. 3.2011 18:22:29
  7. Sojka, P.; Liska, M.: ¬The art of mathematics retrieval (2011) 0.03
    0.028090624 = product of:
      0.07490833 = sum of:
        0.032094855 = weight(_text_:retrieval in 3450) [ClassicSimilarity], result of:
          0.032094855 = score(doc=3450,freq=4.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.33085006 = fieldWeight in 3450, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3450)
        0.021307012 = product of:
          0.042614024 = sum of:
            0.042614024 = weight(_text_:system in 3450) [ClassicSimilarity], result of:
              0.042614024 = score(doc=3450,freq=6.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.42190298 = fieldWeight in 3450, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3450)
          0.5 = coord(1/2)
        0.02150647 = product of:
          0.04301294 = sum of:
            0.04301294 = weight(_text_:22 in 3450) [ClassicSimilarity], result of:
              0.04301294 = score(doc=3450,freq=4.0), product of:
                0.112301625 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032069415 = queryNorm
                0.38301262 = fieldWeight in 3450, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3450)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    The design and architecture of MIaS (Math Indexer and Searcher), a system for mathematics retrieval, is presented, and design decisions are discussed. We argue for an approach based on Presentation MathML using a similarity of math subformulae. The system was implemented as a math-aware search engine based on the state-of-the-art system Apache Lucene. Scalability issues were checked against more than 400,000 arXiv documents with 158 million mathematical formulae. Almost three billion MathML subformulae were indexed using a Solr-compatible Lucene.
    Content
    Cf.: DocEng2011, September 19-22, 2011, Mountain View, California, USA. Copyright 2011 ACM 978-1-4503-0863-2/11/09
    Date
    22. 2.2017 13:00:42
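Entry 7 above describes MIaS indexing billions of MathML subformulae so that similar subformulae can be matched. The fragment below is only an illustrative sketch of the general idea of enumerating every subformula of a Presentation MathML expression as an indexable string; the toy formula and the plain serialization are assumptions, not the MIaS implementation.

```python
import xml.etree.ElementTree as ET

# Toy Presentation MathML for x + y^2 (illustrative input, not from the paper).
doc = ET.fromstring(
    '<math xmlns="http://www.w3.org/1998/Math/MathML">'
    "<mrow><mi>x</mi><mo>+</mo><msup><mi>y</mi><mn>2</mn></msup></mrow></math>"
)

def subformulae(node):
    """Yield every subtree of the expression as a string, so that each
    subformula can be indexed as a term of its own."""
    yield ET.tostring(node, encoding="unicode")
    for child in node:
        yield from subformulae(child)

for s in subformulae(doc):
    print(s)
```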
  8. Oard, D.W.: Alternative approaches for cross-language text retrieval (1997) 0.03
    0.026564179 = product of:
      0.07083781 = sum of:
        0.043947693 = weight(_text_:retrieval in 1164) [ClassicSimilarity], result of:
          0.043947693 = score(doc=1164,freq=30.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.45303512 = fieldWeight in 1164, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1164)
        0.008698551 = product of:
          0.017397102 = sum of:
            0.017397102 = weight(_text_:system in 1164) [ClassicSimilarity], result of:
              0.017397102 = score(doc=1164,freq=4.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.17224117 = fieldWeight in 1164, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1164)
          0.5 = coord(1/2)
        0.018191572 = product of:
          0.036383145 = sum of:
            0.036383145 = weight(_text_:etc in 1164) [ClassicSimilarity], result of:
              0.036383145 = score(doc=1164,freq=2.0), product of:
                0.17370372 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.032069415 = queryNorm
                0.20945519 = fieldWeight in 1164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1164)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    The explosive growth of the Internet and other sources of networked information has made automatic mediation of access to networked information sources an increasingly important problem. Much of this information is expressed as electronic text, and it is becoming practical to automatically convert some printed documents and recorded speech to electronic text as well. Thus, automated systems capable of detecting useful documents are finding widespread application. With even a small number of languages it can be inconvenient to issue the same query repeatedly in every language, so users who are able to read more than one language will likely prefer a multilingual text retrieval system over a collection of monolingual systems. And since reading ability in a language does not always imply fluent writing ability in that language, such users will likely find cross-language text retrieval particularly useful for languages in which they are less confident of their ability to express their information needs effectively. The use of such systems can also be beneficial if the user is able to read only a single language. For example, when only a small portion of the document collection will ever be examined by the user, performing retrieval before translation can be significantly more economical than performing translation before retrieval. So when the application is sufficiently important to justify the time and effort required for translation, those costs can be minimized if an effective cross-language text retrieval system is available. Even when translation is not available, there are circumstances in which cross-language text retrieval could be useful to a monolingual user. For example, a researcher might find a paper published in an unfamiliar language useful if that paper contains references to works by the same author that are in the researcher's native language.
    Multilingual text retrieval can be defined as selection of useful documents from collections that may contain several languages (English, French, Chinese, etc.). This formulation allows for the possibility that individual documents might contain more than one language, a common occurrence in some applications. Both cross-language and within-language retrieval are included in this formulation, but it is the cross-language aspect of the problem which distinguishes multilingual text retrieval from its well studied monolingual counterpart. At the SIGIR 96 workshop on "Cross-Linguistic Information Retrieval" the participants discussed the proliferation of terminology being used to describe the field and settled on "Cross-Language" as the best single description of the salient aspect of the problem. "Multilingual" was felt to be too broad, since that term has also been used to describe systems able to perform within-language retrieval in more than one language but that lack any cross-language capability. "Cross-lingual" and "cross-linguistic" were felt to be equally good descriptions of the field, but "cross-language" was selected as the preferred term in the interest of standardization. Unfortunately, at about the same time the U.S. Defense Advanced Research Projects Agency (DARPA) introduced "translingual" as their preferred term, so we are still some distance from reaching consensus on this matter.
    I will not attempt to draw a sharp distinction between retrieval and filtering in this survey. Although my own work on adaptive cross-language text filtering has led me to make this distinction fairly carefully in other presentations (cf. Oard 1997b), such an approach does little to help understand the fundamental techniques which have been applied or the results that have been obtained in this case. Since it is still common to view filtering (detection of useful documents in dynamic document streams) as a kind of retrieval, I will simply adopt that perspective here.
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  9. Dolin, R.; Agrawal, D.; El Abbadi, A.; Pearlman, J.: Using automated classification for summarizing and selecting heterogeneous information sources (1998) 0.03
    0.025964873 = product of:
      0.06923966 = sum of:
        0.027509877 = weight(_text_:retrieval in 316) [ClassicSimilarity], result of:
          0.027509877 = score(doc=316,freq=4.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.2835858 = fieldWeight in 316, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=316)
        0.010544236 = product of:
          0.021088472 = sum of:
            0.021088472 = weight(_text_:system in 316) [ClassicSimilarity], result of:
              0.021088472 = score(doc=316,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.20878783 = fieldWeight in 316, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=316)
          0.5 = coord(1/2)
        0.031185552 = product of:
          0.062371105 = sum of:
            0.062371105 = weight(_text_:etc in 316) [ClassicSimilarity], result of:
              0.062371105 = score(doc=316,freq=2.0), product of:
                0.17370372 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.032069415 = queryNorm
                0.35906604 = fieldWeight in 316, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.046875 = fieldNorm(doc=316)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    Information retrieval over the Internet increasingly requires the filtering of thousands of heterogeneous information sources. Important sources of information include not only traditional databases with structured data and queries, but also increasing numbers of non-traditional, semi- or unstructured collections such as Web sites, FTP archives, etc. As the number and variability of sources increases, new ways of automatically summarizing, discovering, and selecting collections relevant to a user's query are needed. One such method involves the use of classification schemes, such as the Library of Congress Classification (LCC) [10], within which a collection may be represented based on its content, irrespective of the structure of the actual data or documents. For such a system to be useful in a large-scale distributed environment, it must be easy to use for both collection managers and users. As a result, it must be possible to classify documents automatically within a classification scheme. Furthermore, there must be a straightforward and intuitive interface with which the user may use the scheme to assist in information retrieval (IR).
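Entry 9 above proposes summarizing each collection by automatically classifying its documents within a scheme such as LCC and then selecting collections by comparing a query against those summaries. The sketch below only illustrates that idea; the keyword-based classifier, the three LCC classes and the ranking are invented for illustration and are not the classification method described in the paper.

```python
from collections import Counter

# Hypothetical keyword evidence for a few LCC top-level classes; a real system
# would use a trained classifier, this mapping exists only for illustration.
LCC_KEYWORDS = {
    "Q": {"science", "mathematics", "physics"},
    "T": {"engineering", "software", "network"},
    "Z": {"library", "bibliography", "cataloging"},
}

def classify(text):
    """Assign a text to the LCC class with the most keyword hits, if any."""
    tokens = set(text.lower().split())
    hits = {cls: len(tokens & kws) for cls, kws in LCC_KEYWORDS.items()}
    best = max(hits, key=hits.get)
    return best if hits[best] > 0 else None

def summarize(collection):
    """Represent a collection as a histogram over LCC classes."""
    return Counter(c for doc in collection if (c := classify(doc)) is not None)

def rank_collections(query, summaries):
    """Rank collections by how much of their content falls in the query's class."""
    qcls = classify(query)
    return sorted(summaries, key=lambda name: summaries[name].get(qcls, 0), reverse=True)

collections = {
    "ftp-archive": ["software network tools", "engineering handbook"],
    "opac": ["library cataloging rules", "bibliography of physics"],
}
summaries = {name: summarize(docs) for name, docs in collections.items()}
print(rank_collections("library cataloging", summaries))  # ['opac', 'ftp-archive']
```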
  10. Beppler, F.D.; Fonseca, F.T.; Pacheco, R.C.S.: Hermeneus: an architecture for an ontology-enabled information retrieval (2008) 0.02
    0.024371484 = product of:
      0.064990625 = sum of:
        0.033692583 = weight(_text_:retrieval in 3261) [ClassicSimilarity], result of:
          0.033692583 = score(doc=3261,freq=6.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.34732026 = fieldWeight in 3261, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=3261)
        0.018263152 = product of:
          0.036526304 = sum of:
            0.036526304 = weight(_text_:system in 3261) [ClassicSimilarity], result of:
              0.036526304 = score(doc=3261,freq=6.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.36163113 = fieldWeight in 3261, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3261)
          0.5 = coord(1/2)
        0.013034889 = product of:
          0.026069777 = sum of:
            0.026069777 = weight(_text_:22 in 3261) [ClassicSimilarity], result of:
              0.026069777 = score(doc=3261,freq=2.0), product of:
                0.112301625 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032069415 = queryNorm
                0.23214069 = fieldWeight in 3261, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3261)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    Ontologies improve IR systems with regard to their retrieval and presentation of information, which makes the task of finding information more effective, efficient, and interactive. In this paper we argue that ontologies also greatly improve the engineering of such systems. We created a framework that uses an ontology to drive the process of engineering an IR system. We developed a prototype that shows how a domain specialist without knowledge in the IR field can build an IR system with interactive components. The resulting system provides support for users not only to satisfy their information needs but also to extend their state of knowledge. This way, our approach to ontology-enabled information retrieval addresses both the engineering aspect described here and the usability aspect described elsewhere.
    Date
    28.11.2016 12:43:22
  11. Mas, S.; Zaher, L'H.; Zacklad, M.: Design & evaluation of multi-viewed knowledge system for administrative electronic document organization (2008) 0.02
    0.022282436 = product of:
      0.089129746 = sum of:
        0.025936563 = weight(_text_:retrieval in 2480) [ClassicSimilarity], result of:
          0.025936563 = score(doc=2480,freq=2.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.26736724 = fieldWeight in 2480, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=2480)
        0.06319319 = sum of:
          0.028117962 = weight(_text_:system in 2480) [ClassicSimilarity], result of:
            0.028117962 = score(doc=2480,freq=2.0), product of:
              0.10100432 = queryWeight, product of:
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.032069415 = queryNorm
              0.27838376 = fieldWeight in 2480, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.0625 = fieldNorm(doc=2480)
          0.03507523 = weight(_text_:29 in 2480) [ClassicSimilarity], result of:
            0.03507523 = score(doc=2480,freq=2.0), product of:
              0.11281017 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.032069415 = queryNorm
              0.31092256 = fieldWeight in 2480, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.0625 = fieldNorm(doc=2480)
      0.25 = coord(2/8)
    
    Abstract
    This communication describes part of current research carried out at the Université de Technologie de Troyes and funded by a postdoctoral grant from the Fonds québécois de la recherche sur la société et la culture. Under the title "Design and evaluation of a faceted classification for uniform and personal organization of administrative electronic documents", our research investigates the feasibility of creating a faceted, multi-point-of-view classification scheme for administrative document organization and retrieval in online environments.
    Date
    29. 8.2009 21:15:48
  12. Guidi, F.; Sacerdoti Coen, C.: ¬A survey on retrieval of mathematical knowledge (2015) 0.02
    0.019469779 = product of:
      0.077879116 = sum of:
        0.056154303 = weight(_text_:retrieval in 5865) [ClassicSimilarity], result of:
          0.056154303 = score(doc=5865,freq=6.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.5788671 = fieldWeight in 5865, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.078125 = fieldNorm(doc=5865)
        0.021724815 = product of:
          0.04344963 = sum of:
            0.04344963 = weight(_text_:22 in 5865) [ClassicSimilarity], result of:
              0.04344963 = score(doc=5865,freq=2.0), product of:
                0.112301625 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032069415 = queryNorm
                0.38690117 = fieldWeight in 5865, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5865)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    We present a short survey of the literature on indexing and retrieval of mathematical knowledge, with pointers to 72 papers and tentative taxonomies of both retrieval problems and recurring techniques.
    Date
    22. 2.2017 12:51:57
  13. Priss, U.: Faceted knowledge representation (1999) 0.02
    0.0188263 = product of:
      0.05020347 = sum of:
        0.02269449 = weight(_text_:retrieval in 2654) [ClassicSimilarity], result of:
          0.02269449 = score(doc=2654,freq=2.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.23394634 = fieldWeight in 2654, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2654)
        0.012301609 = product of:
          0.024603218 = sum of:
            0.024603218 = weight(_text_:system in 2654) [ClassicSimilarity], result of:
              0.024603218 = score(doc=2654,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.2435858 = fieldWeight in 2654, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2654)
          0.5 = coord(1/2)
        0.01520737 = product of:
          0.03041474 = sum of:
            0.03041474 = weight(_text_:22 in 2654) [ClassicSimilarity], result of:
              0.03041474 = score(doc=2654,freq=2.0), product of:
                0.112301625 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032069415 = queryNorm
                0.2708308 = fieldWeight in 2654, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2654)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    Faceted Knowledge Representation provides a formalism for implementing knowledge systems. The basic notions of faceted knowledge representation are "unit", "relation", "facet" and "interpretation". Units are atomic elements and can be abstract elements or refer to external objects in an application. Relations are sequences or matrices of 0s and 1s (binary matrices). Facets are relational structures that combine units and relations. Each facet represents an aspect or viewpoint of a knowledge system. Interpretations are mappings that can be used to translate between different representations. This paper introduces the basic notions of faceted knowledge representation. The formalism is applied here to an abstract modeling of a faceted thesaurus as used in information retrieval.
    Date
    22. 1.2016 17:30:31
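The Priss abstract (entry 13 above) defines units, relations as binary 0/1 matrices, facets that combine them, and interpretations that map between representations. The sketch below is one possible minimal rendering of those notions; the example units, relation and notation mapping are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Facet:
    """One aspect/viewpoint of a knowledge system: a set of units plus a
    binary relation over them, stored as a 0/1 matrix."""
    name: str
    units: list
    matrix: list  # matrix[i][j] == 1 iff the relation holds between units i and j

    def related(self, a, b):
        i, j = self.units.index(a), self.units.index(b)
        return self.matrix[i][j] == 1

# Illustrative thesaurus-like facet: a broader-term relation over three units.
broader = Facet(
    name="broader-term",
    units=["retrieval", "information retrieval", "computer science"],
    matrix=[
        [0, 0, 0],
        [1, 0, 1],  # "information retrieval" has the two other units as broader terms
        [0, 0, 0],
    ],
)

# An interpretation maps units of one representation onto another (here: notation).
interpretation = {"information retrieval": "025.04", "computer science": "004"}

print(broader.related("information retrieval", "retrieval"))   # True
print(interpretation["information retrieval"])                 # 025.04
```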
  14. Kottmann, N.; Studer, T.: Improving semantic query answering (2006) 0.02
    0.018013723 = product of:
      0.07205489 = sum of:
        0.057995915 = weight(_text_:retrieval in 3979) [ClassicSimilarity], result of:
          0.057995915 = score(doc=3979,freq=10.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.59785134 = fieldWeight in 3979, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=3979)
        0.014058981 = product of:
          0.028117962 = sum of:
            0.028117962 = weight(_text_:system in 3979) [ClassicSimilarity], result of:
              0.028117962 = score(doc=3979,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.27838376 = fieldWeight in 3979, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3979)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    The retrieval problem is one of the main reasoning tasks for knowledge base systems. Given a knowledge base K and a concept C, the retrieval problem consists of finding all individuals a for which K logically entails C(a). We present an approach to answer retrieval queries over (a restriction of) OWL ontologies. Our solution is based on reducing the retrieval problem to a problem of evaluating an SQL query over a database constructed from the original knowledge base. We provide complete answers to retrieval problems. Still, our system performs very well as is shown by a standard benchmark.
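Entry 14 above reduces instance retrieval - finding all individuals a for which K logically entails C(a) - to evaluating an SQL query over a database constructed from the knowledge base. The sketch below shows one hypothetical reduction of this kind, using an invented two-table schema (asserted concept memberships plus a precomputed subconcept closure); the paper's actual schema and query construction are not reproduced here.

```python
import sqlite3

# Hypothetical schema: asserted memberships plus a precomputed, reflexive-
# transitive subconcept closure derived from the ontology's class hierarchy.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE assertion (individual TEXT, concept TEXT);
    CREATE TABLE subconcept (sub TEXT, super TEXT);
""")
con.executemany("INSERT INTO assertion VALUES (?, ?)",
                [("mias", "SearchEngine"), ("hermeneus", "IRSystem")])
con.executemany("INSERT INTO subconcept VALUES (?, ?)",
                [("SearchEngine", "SearchEngine"), ("IRSystem", "IRSystem"),
                 ("SearchEngine", "IRSystem")])  # SearchEngine is subsumed by IRSystem

def retrieve(concept):
    """All individuals a such that the KB entails concept(a), via a single SQL query."""
    rows = con.execute("""
        SELECT DISTINCT a.individual
        FROM assertion AS a JOIN subconcept AS s ON a.concept = s.sub
        WHERE s.super = ?""", (concept,))
    return [r[0] for r in rows]

print(retrieve("IRSystem"))  # both toy individuals above
```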
  15. Schirrmeister, N.-P.; Keil, S.: Aufbau einer Infrastruktur für Information Retrieval-Evaluationen (2012) 0.02
    0.018013723 = product of:
      0.07205489 = sum of:
        0.057995915 = weight(_text_:retrieval in 3097) [ClassicSimilarity], result of:
          0.057995915 = score(doc=3097,freq=10.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.59785134 = fieldWeight in 3097, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=3097)
        0.014058981 = product of:
          0.028117962 = sum of:
            0.028117962 = weight(_text_:system in 3097) [ClassicSimilarity], result of:
              0.028117962 = score(doc=3097,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.27838376 = fieldWeight in 3097, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3097)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Das Projekt "Aufbau einer Infrastruktur für Information Retrieval-Evaluationen" (AIIRE) bietet eine Softwareinfrastruktur zur Unterstützung von Information Retrieval-Evaluationen (IR-Evaluationen). Die Infrastruktur basiert auf einem Tool-Kit, das bei GESIS im Rahmen des DFG-Projekts IRM entwickelt wurde. Ziel ist es, ein System zu bieten, das zur Forschung und Lehre am Fachbereich Media für IR-Evaluationen genutzt werden kann. This paper describes some aspects of a project called "Aufbau einer Infrastruktur für Information Retrieval-Evaluationen" (AIIRE). Its goal is to build a software-infrastructure which supports the evaluation of information retrieval algorithms.
  16. Linden, E.J. van der; Vliegen, R.; Wijk, J.J. van: Visual Universal Decimal Classification (2007) 0.02
    0.01714274 = product of:
      0.06857096 = sum of:
        0.016210351 = weight(_text_:retrieval in 548) [ClassicSimilarity], result of:
          0.016210351 = score(doc=548,freq=2.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.16710453 = fieldWeight in 548, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=548)
        0.052360605 = sum of:
          0.030438587 = weight(_text_:system in 548) [ClassicSimilarity], result of:
            0.030438587 = score(doc=548,freq=6.0), product of:
              0.10100432 = queryWeight, product of:
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.032069415 = queryNorm
              0.30135927 = fieldWeight in 548, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.0390625 = fieldNorm(doc=548)
          0.021922018 = weight(_text_:29 in 548) [ClassicSimilarity], result of:
            0.021922018 = score(doc=548,freq=2.0), product of:
              0.11281017 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.032069415 = queryNorm
              0.19432661 = fieldWeight in 548, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.0390625 = fieldNorm(doc=548)
      0.25 = coord(2/8)
    
    Abstract
    UDC aims to be a consistent and complete classification system that enables practitioners to classify documents swiftly and smoothly. The eventual goal of UDC is to enable the public at large to retrieve documents from large collections of documents that are classified with UDC. The large size of the UDC Master Reference File (MRF), with over 66,000 records, makes it difficult to obtain an overview and to understand its structure. Moreover, finding the right classification in the MRF turns out to be difficult in practice. Last but not least, retrieval of documents requires insight into and understanding of the coding system. Visualization is an effective means to support the development of UDC as well as its use by practitioners. Moreover, visualization offers possibilities to use the classification without use of the coding system as such. MagnaView has developed an application which demonstrates the use of interactive visualization to face these challenges. In our presentation, we discuss these challenges and give a demonstration of the way the application helps to address them. Examples of visualizations can be found below.
    Source
    Extensions and corrections to the UDC. 29(2007), S.297-300
  17. Paralic, J.; Kostial, I.: Ontology-based information retrieval (2003) 0.02
    0.017035883 = product of:
      0.06814353 = sum of:
        0.050746426 = weight(_text_:retrieval in 1153) [ClassicSimilarity], result of:
          0.050746426 = score(doc=1153,freq=10.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.5231199 = fieldWeight in 1153, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1153)
        0.017397102 = product of:
          0.034794204 = sum of:
            0.034794204 = weight(_text_:system in 1153) [ClassicSimilarity], result of:
              0.034794204 = score(doc=1153,freq=4.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.34448233 = fieldWeight in 1153, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1153)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    In the proposed article a new, ontology-based approach to information retrieval (IR) is presented. The system is based on a domain knowledge representation schema in the form of an ontology. New resources registered within the system are linked to concepts from this ontology. In this way resources may be retrieved based on the associations and not only on partial or exact term matching, as the use of the vector model presumes. In order to evaluate the quality of this retrieval mechanism, experiments to measure retrieval efficiency have been performed with the well-known Cystic Fibrosis collection of medical scientific papers. The ontology-based retrieval mechanism has been compared with traditional full-text search based on the vector IR model as well as with the Latent Semantic Indexing method.
  18. Furner, J.: User tagging of library resources : toward a framework for system evaluation (2007) 0.02
    0.01671183 = product of:
      0.06684732 = sum of:
        0.019452421 = weight(_text_:retrieval in 703) [ClassicSimilarity], result of:
          0.019452421 = score(doc=703,freq=2.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.20052543 = fieldWeight in 703, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=703)
        0.047394894 = sum of:
          0.021088472 = weight(_text_:system in 703) [ClassicSimilarity], result of:
            0.021088472 = score(doc=703,freq=2.0), product of:
              0.10100432 = queryWeight, product of:
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.032069415 = queryNorm
              0.20878783 = fieldWeight in 703, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.046875 = fieldNorm(doc=703)
          0.02630642 = weight(_text_:29 in 703) [ClassicSimilarity], result of:
            0.02630642 = score(doc=703,freq=2.0), product of:
              0.11281017 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.032069415 = queryNorm
              0.23319192 = fieldWeight in 703, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.046875 = fieldNorm(doc=703)
      0.25 = coord(2/8)
    
    Abstract
    Although user tagging of library resources shows substantial promise as a means of improving the quality of users' access to those resources, several important questions about the level and nature of the warrant for basing retrieval tools on user tagging are yet to receive full consideration by library practitioners and researchers. Among these is the simple evaluative question: What, specifically, are the factors that determine whether or not user-tagging services will be successful? If success is to be defined in terms of the effectiveness with which systems perform the particular functions expected of them (rather than simply in terms of popularity), an understanding is needed both of the multifunctional nature of tagging tools, and of the complex nature of users' mental models of that multifunctionality. In this paper, a conceptual framework is developed for the evaluation of systems that integrate user tagging with more traditional methods of library resource description.
    Date
    26.12.2011 13:29:31
  19. Mayr, P.; Petras, V.: Cross-concordances : terminology mapping and its effectiveness for information retrieval (2008) 0.02
    0.01671183 = product of:
      0.06684732 = sum of:
        0.019452421 = weight(_text_:retrieval in 2323) [ClassicSimilarity], result of:
          0.019452421 = score(doc=2323,freq=2.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.20052543 = fieldWeight in 2323, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2323)
        0.047394894 = sum of:
          0.021088472 = weight(_text_:system in 2323) [ClassicSimilarity], result of:
            0.021088472 = score(doc=2323,freq=2.0), product of:
              0.10100432 = queryWeight, product of:
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.032069415 = queryNorm
              0.20878783 = fieldWeight in 2323, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.046875 = fieldNorm(doc=2323)
          0.02630642 = weight(_text_:29 in 2323) [ClassicSimilarity], result of:
            0.02630642 = score(doc=2323,freq=2.0), product of:
              0.11281017 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.032069415 = queryNorm
              0.23319192 = fieldWeight in 2323, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.046875 = fieldNorm(doc=2323)
      0.25 = coord(2/8)
    
    Abstract
    The German Federal Ministry for Education and Research funded a major terminology mapping initiative, which found its conclusion in 2007. The task of this terminology mapping initiative was to organize, create and manage 'cross-concordances' between controlled vocabularies (thesauri, classification systems, subject heading lists) centred around the social sciences but quickly extending to other subject areas. 64 crosswalks with more than 500,000 relations were established. In the final phase of the project, a major evaluation effort to test and measure the effectiveness of the vocabulary mappings in an information system environment was conducted. The paper reports on the cross-concordance work and evaluation results.
    Date
    26.12.2011 13:33:29
  20. Rindflesch, T.C.; Aronson, A.R.: Semantic processing in information retrieval (1993) 0.02
    0.016522959 = product of:
      0.066091835 = sum of:
        0.050746426 = weight(_text_:retrieval in 4121) [ClassicSimilarity], result of:
          0.050746426 = score(doc=4121,freq=10.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.5231199 = fieldWeight in 4121, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4121)
        0.015345411 = product of:
          0.030690823 = sum of:
            0.030690823 = weight(_text_:29 in 4121) [ClassicSimilarity], result of:
              0.030690823 = score(doc=4121,freq=2.0), product of:
                0.11281017 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.032069415 = queryNorm
                0.27205724 = fieldWeight in 4121, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4121)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Intuition suggests that one way to enhance the information retrieval process would be the use of phrases to characterize the contents of text. A number of researchers, however, have noted that phrases alone do not improve retrieval effectiveness. In this paper we briefly review the use of phrases in information retrieval and then suggest extensions to this paradigm using semantic information. We claim that semantic processing, which can be viewed as expressing relations between the concepts represented by phrases, will in fact enhance retrieval effectiveness. The availability of the UMLS® domain model, which we exploit extensively, significantly contributes to the feasibility of this processing.
    Date
    29. 6.2015 14:51:28

Languages

  • e 414
  • d 230
  • a 3
  • el 3
  • i 3
  • es 1
  • nl 1

Types

  • a 309
  • i 23
  • x 16
  • r 14
  • s 11
  • m 9
  • p 6
  • b 4
  • n 2
