Search (39 results, page 1 of 2)

  • year_i:[2010 TO 2020}
  • theme_ss:"Multilinguale Probleme"
  1. Peters, C.; Braschler, M.; Clough, P.: Multilingual information retrieval : from research to practice (2012) 0.03
    0.031534832 = product of:
      0.1103719 = sum of:
        0.025709987 = weight(_text_:wide in 361) [ClassicSimilarity], result of:
          0.025709987 = score(doc=361,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.1958137 = fieldWeight in 361, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=361)
        0.029287368 = weight(_text_:elektronische in 361) [ClassicSimilarity], result of:
          0.029287368 = score(doc=361,freq=2.0), product of:
            0.14013545 = queryWeight, product of:
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.029633347 = queryNorm
            0.20899329 = fieldWeight in 361, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.03125 = fieldNorm(doc=361)
        0.015630832 = weight(_text_:information in 361) [ClassicSimilarity], result of:
          0.015630832 = score(doc=361,freq=30.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.3004734 = fieldWeight in 361, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=361)
        0.03974371 = weight(_text_:retrieval in 361) [ClassicSimilarity], result of:
          0.03974371 = score(doc=361,freq=22.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.44337842 = fieldWeight in 361, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=361)
      0.2857143 = coord(4/14)
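The explain trees in this listing all follow Lucene's ClassicSimilarity (TF-IDF) arithmetic. A minimal sketch that recomputes the first clause of the tree above from its printed inputs:

```python
import math

# Values copied from the first clause of the explain tree
# (weight(_text_:wide in 361)).
freq = 2.0            # termFreq of "wide" in doc 361
idf = 4.4307585       # idf(docFreq=1430, maxDocs=44218)
query_norm = 0.029633347
field_norm = 0.03125

# ClassicSimilarity: tf = sqrt(freq); idf = ln(maxDocs/(docFreq+1)) + 1
tf = math.sqrt(freq)                  # 1.4142135
query_weight = idf * query_norm       # 0.1312982 (queryWeight)
field_weight = tf * idf * field_norm  # 0.1958137 (fieldWeight)
score = query_weight * field_weight   # 0.025709987

print(score)
```

The document's total is then the sum of the matching clauses scaled by the coordination factor, i.e. 0.1103719 x coord(4/14) = 0.031534832, as shown at the top of the tree.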
    
    Abstract
We are living in a multilingual world, and the diversity of languages used to interact with information access systems has generated a wide variety of challenges to be addressed by computer and information scientists. The growing amount of non-English information accessible globally and the increased worldwide exposure of enterprises also necessitate the adaptation of Information Retrieval (IR) methods to new, multilingual settings. Peters, Braschler and Clough present a comprehensive description of the technologies involved in designing and developing systems for Multilingual Information Retrieval (MLIR). They provide readers with broad coverage of the various issues involved in creating systems that make digitally stored materials accessible regardless of the language(s) they are written in. Details of Cross-Language Information Retrieval (CLIR) are also covered to help readers understand how to develop retrieval systems that cross language boundaries. Their work is divided into six chapters and accompanies the reader step by step through the various stages involved in building, using and evaluating MLIR systems. The book concludes with some examples of recent applications that utilise MLIR technologies. Some of the techniques described have recently started to appear in commercial search systems, while others have the potential to be part of future incarnations. The book is intended for graduate students, scholars, and practitioners with a basic understanding of classical text retrieval methods. It offers guidelines and information on all aspects that need to be taken into consideration when building MLIR systems, while avoiding too many 'hands-on details' that could rapidly become obsolete. Thus it bridges the gap between the material covered by most of the classical IR textbooks and the novel requirements related to the acquisition and dissemination of information in whatever language it is stored.
    Content
Contents: 1 Introduction 2 Within-Language Information Retrieval 3 Cross-Language Information Retrieval 4 Interaction and User Interfaces 5 Evaluation for Multilingual Information Retrieval Systems 6 Applications of Multilingual Information Access
    Footnote
Electronic edition at: http://springer.r.delivery.net/r/r?2.1.Ee.2Tp.1gd0L5.C3WE8i..N.WdtG.3uq2.bW89MQ%5f%5fCXWIFOJ0.
    RSWK
    Information-Retrieval-System / Mehrsprachigkeit / Abfrage / Zugriff
    Subject
    Information-Retrieval-System / Mehrsprachigkeit / Abfrage / Zugriff
  2. Franz, G.: ¬Die vielen Wikipedias : Vielsprachigkeit als Zugang zu einer globalisierten Online-Welt (2011) 0.01
    0.012729125 = product of:
      0.05940258 = sum of:
        0.013948122 = weight(_text_:web in 568) [ClassicSimilarity], result of:
          0.013948122 = score(doc=568,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.14422815 = fieldWeight in 568, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=568)
        0.041418593 = weight(_text_:elektronische in 568) [ClassicSimilarity], result of:
          0.041418593 = score(doc=568,freq=4.0), product of:
            0.14013545 = queryWeight, product of:
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.029633347 = queryNorm
            0.29556113 = fieldWeight in 568, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.03125 = fieldNorm(doc=568)
        0.0040358636 = weight(_text_:information in 568) [ClassicSimilarity], result of:
          0.0040358636 = score(doc=568,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.0775819 = fieldWeight in 568, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=568)
      0.21428572 = coord(3/14)
    
    Abstract
More than ten years have now passed since Wikipedia was founded. The collaboratively compiled online encyclopedia looks back on an unparalleled success story and has meanwhile taught even many classical encyclopedias to fear for their future. But there is no such thing as *the* Wikipedia! Instead, the project consists of hundreds of different language versions that operate largely independently of one another. They differ not only in size but also in content: articles on one and the same topic can diverge considerably from one Wikipedia to the next. Knowledge already compiled by one community is therefore not equally available to all users in the world. With a determined interlingual knowledge exchange, however, it could be used for the mutual enrichment of the Wikipedias. The book first gives a general overview of Wikipedia, covering its origins, how it works, and the actors involved. The 'secret of success' of the reference work as well as current challenges are also worked out. The subsequent study shows how strongly Wikipedias of different sizes diverge from one another and where exactly the differences lie. This is followed by a presentation of approaches, tools, and difficulties of interlingual knowledge exchange between the language versions. The final part sketches a detailed concept for a new kind of knowledge exchange, built from several interlocking components around the core of a dedicated translation interface. The concept can also serve as a blueprint for localization efforts in multilingual wikis of the kind increasingly deployed by internationally operating companies.
The thesis on which this book is based received the FHP Prize 2011 for the best final thesis in the degree program "Information und Dokumentation" at FH Potsdam.
    BK
    05.38 (Neue elektronische Medien)
    Classification
    05.38 (Neue elektronische Medien)
    Series
    Reihe Web 2.0
  3. Ye, Z.; Huang, J.X.; He, B.; Lin, H.: Mining a multilingual association dictionary from Wikipedia for cross-language information retrieval (2012) 0.01
    0.011168014 = product of:
      0.0521174 = sum of:
        0.017435152 = weight(_text_:web in 513) [ClassicSimilarity], result of:
          0.017435152 = score(doc=513,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.18028519 = fieldWeight in 513, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=513)
        0.008737902 = weight(_text_:information in 513) [ClassicSimilarity], result of:
          0.008737902 = score(doc=513,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16796975 = fieldWeight in 513, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=513)
        0.025944345 = weight(_text_:retrieval in 513) [ClassicSimilarity], result of:
          0.025944345 = score(doc=513,freq=6.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.28943354 = fieldWeight in 513, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=513)
      0.21428572 = coord(3/14)
    
    Abstract
Wikipedia is characterized by its dense link structure and a large number of articles in different languages, which make it a notable Web corpus for knowledge extraction and mining, in particular for mining multilingual associations. In this paper, motivated by a psychological theory of word meaning, we propose a graph-based approach to constructing a cross-language association dictionary (CLAD) from Wikipedia, which can be used in a variety of cross-language access and processing applications. In order to evaluate the quality of the mined CLAD, and to demonstrate how it can be used in practice, we explore two different applications of the mined CLAD to cross-language information retrieval (CLIR). First, we use the mined CLAD for cross-language query expansion; second, we use it to filter out translation candidates with low translation probabilities. Experimental results on a variety of standard CLIR test collections show that CLIR retrieval performance can be substantially improved with the above two applications of the CLAD, which indicates that the mined CLAD is of sound quality.
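The first CLIR application described above, dictionary-based query expansion, can be sketched as follows. The dictionary entries and weights are invented for illustration and are not taken from the paper:

```python
# Toy cross-language association dictionary (CLAD): source-language term ->
# weighted target-language associations. All entries are invented examples.
clad = {
    "library": [("Bibliothek", 0.9), ("Bücherei", 0.4), ("Archiv", 0.1)],
    "search":  [("Suche", 0.8), ("Recherche", 0.5)],
}

def expand_query(terms, top_k=2, min_weight=0.2):
    """Expand each source-language query term with its strongest
    target-language associations above a weight threshold."""
    expanded = []
    for t in terms:
        expanded.append(t)
        for assoc, w in sorted(clad.get(t, []), key=lambda x: -x[1])[:top_k]:
            if w >= min_weight:
                expanded.append(assoc)
    return expanded

print(expand_query(["library", "search"]))
# -> ['library', 'Bibliothek', 'Bücherei', 'search', 'Suche', 'Recherche']
```

The weight threshold plays the same role as the paper's second application: associations with low weight (here "Archiv" at 0.1) are filtered out rather than added to the query.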
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.12, S.2474-2487
  4. Frâncu, V.; Sabo, C.-N.: Implementation of a UDC-based multilingual thesaurus in a library catalogue : the case of BiblioPhil (2010) 0.01
    0.010986563 = product of:
      0.051270626 = sum of:
        0.012107591 = weight(_text_:information in 3697) [ClassicSimilarity], result of:
          0.012107591 = score(doc=3697,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23274569 = fieldWeight in 3697, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3697)
        0.031133216 = weight(_text_:retrieval in 3697) [ClassicSimilarity], result of:
          0.031133216 = score(doc=3697,freq=6.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.34732026 = fieldWeight in 3697, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=3697)
        0.008029819 = product of:
          0.024089456 = sum of:
            0.024089456 = weight(_text_:22 in 3697) [ClassicSimilarity], result of:
              0.024089456 = score(doc=3697,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.23214069 = fieldWeight in 3697, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3697)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Abstract
In order to enhance the use of Universal Decimal Classification (UDC) numbers in information retrieval, the authors have represented the classification with multilingual thesaurus descriptors and implemented this solution in an automated way. The authors illustrate a solution implemented in the BiblioPhil library system. The standard formats used are UNIMARC for subject authority records (i.e. the UDC-based multilingual thesaurus) and MARC XML for data transfer. The multilingual thesaurus was built according to existing standards, with the constituent parts of the classification notations used as the basis for search terms in multilingual information retrieval. The verbal equivalents, descriptors and non-descriptors, are used to expand the number of concepts and are given in Romanian, English and French. This approach saves the indexer's time and provides easier, more user-friendly access to the bibliographic information. The multilingual aspect of the thesaurus enhances information access for a greater number of online users.
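The core lookup the abstract describes, verbal equivalents in several languages resolving to one UDC notation, can be sketched in a few lines. The record shown is illustrative; the real system stores such records as UNIMARC subject authority records:

```python
# Illustrative thesaurus record: UDC notation -> descriptors per language.
# (A single invented-for-illustration entry; BiblioPhil holds many.)
thesaurus = {
    "811.135.1": {"ro": "Limba română", "en": "Romanian language", "fr": "Langue roumaine"},
}

# Invert the records: any verbal equivalent, in any language,
# resolves to the classification notation used for retrieval.
index = {term.lower(): notation
         for notation, langs in thesaurus.items()
         for term in langs.values()}

def lookup(term):
    """Resolve a descriptor in any of the three languages to its UDC notation."""
    return index.get(term.lower())

print(lookup("Romanian language"))  # -> 811.135.1
```

Searching by "Langue roumaine" or "Limba română" reaches the same notation, which is what makes the retrieval multilingual while the index itself stays language-neutral.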
    Date
    22. 7.2010 20:40:56
    Theme
    Klassifikationssysteme im Online-Retrieval
  5. Rettinger, A.; Schumilin, A.; Thoma, S.; Ell, B.: Learning a cross-lingual semantic representation of relations expressed in text (2015) 0.01
    0.0100695435 = product of:
      0.0704868 = sum of:
        0.06039714 = weight(_text_:web in 2027) [ClassicSimilarity], result of:
          0.06039714 = score(doc=2027,freq=6.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.6245262 = fieldWeight in 2027, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=2027)
        0.010089659 = weight(_text_:information in 2027) [ClassicSimilarity], result of:
          0.010089659 = score(doc=2027,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.19395474 = fieldWeight in 2027, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=2027)
      0.14285715 = coord(2/14)
    
    Series
    Information Systems and Applications, incl. Internet/Web, and HCI; Bd. 9088
    Source
The Semantic Web: latest advances and new domains. 12th European Semantic Web Conference, ESWC 2015 Portoroz, Slovenia, May 31 -- June 4, 2015. Proceedings. Eds.: F. Gandon et al.
  6. Flores, F.N.; Moreira, V.P.: Assessing the impact of stemming accuracy on information retrieval : a multilingual perspective (2016) 0.01
    0.008408247 = product of:
      0.058857724 = sum of:
        0.01482871 = weight(_text_:information in 3187) [ClassicSimilarity], result of:
          0.01482871 = score(doc=3187,freq=12.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.2850541 = fieldWeight in 3187, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3187)
        0.044029012 = weight(_text_:retrieval in 3187) [ClassicSimilarity], result of:
          0.044029012 = score(doc=3187,freq=12.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.49118498 = fieldWeight in 3187, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=3187)
      0.14285715 = coord(2/14)
    
    Abstract
    The quality of stemming algorithms is typically measured in two different ways: (i) how accurately they map the variant forms of a word to the same stem; or (ii) how much improvement they bring to Information Retrieval systems. In this article, we evaluate various stemming algorithms, in four languages, in terms of accuracy and in terms of their aid to Information Retrieval. The aim is to assess whether the most accurate stemmers are also the ones that bring the biggest gain in Information Retrieval. Experiments in English, French, Portuguese, and Spanish show that this is not always the case, as stemmers with higher error rates yield better retrieval quality. As a byproduct, we also identified the most accurate stemmers and the best for Information Retrieval purposes.
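Criterion (i), mapping the variant forms of a word to the same stem, can be measured mechanically. A toy sketch with a deliberately crude suffix stripper (the stemmer and word groups are illustrative, not from the article):

```python
def crude_stem(word):
    # Deliberately simple suffix stripper (illustrative only).
    for suffix in ("ation", "ing", "ion", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

# Groups of variant forms that should all map to one stem (criterion (i)).
variant_groups = [
    ["connect", "connected", "connecting", "connection"],
    ["run", "running", "runs"],
]

# Accuracy = fraction of groups collapsed to a single stem.
collapsed = sum(len({crude_stem(w) for w in g}) == 1 for g in variant_groups)
accuracy = collapsed / len(variant_groups)
print(accuracy)  # -> 0.5: "running" stems to "runn", breaking the second group
```

Criterion (ii) would instead plug each stemmer into an IR system and compare retrieval quality; the article's point is that rankings under the two criteria need not agree.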
    Source
    Information processing and management. 52(2016) no.5, S.840-854
  7. Luca, E.W. de; Dahlberg, I.: ¬Die Multilingual Lexical Linked Data Cloud : eine mögliche Zugangsoptimierung? (2014) 0.01
    0.008038578 = product of:
      0.03751336 = sum of:
        0.020922182 = weight(_text_:web in 1736) [ClassicSimilarity], result of:
          0.020922182 = score(doc=1736,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.21634221 = fieldWeight in 1736, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1736)
        0.00856136 = weight(_text_:information in 1736) [ClassicSimilarity], result of:
          0.00856136 = score(doc=1736,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16457605 = fieldWeight in 1736, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1736)
        0.008029819 = product of:
          0.024089456 = sum of:
            0.024089456 = weight(_text_:22 in 1736) [ClassicSimilarity], result of:
              0.024089456 = score(doc=1736,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.23214069 = fieldWeight in 1736, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1736)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Abstract
A great deal of information is already available on the Web or can be obtained from isolated structured data stores such as information systems and social networks. Data integration through post-processing or through query mechanisms (e.g. D2R) is therefore important in order to make information generally usable. Semantic technologies enable the use of defined connections (typed links) that capture the relations between data, which benefits every application that can reuse the knowledge contained in the data. To produce a semantic map of the data, we need knowledge about the individual data items and their relations to other data. This paper presents our work on using Lexical Linked Data (LLD) through a meta-model that contains all the resources and additionally makes it possible to find them from different perspectives. We thereby connect existing work on knowledge domains (based on the Information Coding Classification) with the Multilingual Lexical Linked Data Cloud (based on the RDF/OWL representation of EuroWordNet and the similar integrated lexical resources MultiWordNet, MEMODATA, and the Hamburg Metaphor DB).
    Date
    22. 9.2014 19:00:13
    Source
    Information - Wissenschaft und Praxis. 65(2014) H.4/5, S.279-287
  8. Wang, J.; Oard, D.W.: Matching meaning for cross-language information retrieval (2012) 0.01
    0.008009522 = product of:
      0.05606665 = sum of:
        0.014125523 = weight(_text_:information in 7430) [ClassicSimilarity], result of:
          0.014125523 = score(doc=7430,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.27153665 = fieldWeight in 7430, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7430)
        0.04194113 = weight(_text_:retrieval in 7430) [ClassicSimilarity], result of:
          0.04194113 = score(doc=7430,freq=8.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.46789268 = fieldWeight in 7430, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7430)
      0.14285715 = coord(2/14)
    
    Abstract
This article describes a framework for cross-language information retrieval that efficiently leverages statistical estimation of translation probabilities. The framework provides a unified perspective into which some earlier work on techniques for cross-language information retrieval based on translation probabilities can be cast. Modeling synonymy and filtering translation probabilities using bidirectional evidence are shown to yield a balance between retrieval effectiveness and query-time (or indexing-time) efficiency that seems well suited to large-scale applications. Evaluations with six test collections show consistent improvements over strong baselines.
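The bidirectional-evidence filter mentioned above can be sketched as keeping only translation candidates supported in both directions. The probability tables and threshold here are invented for illustration:

```python
# Invented translation probability tables: p(f|e) source->target and
# p(e|f) target->source (values are for illustration only).
p_f_given_e = {"bank": {"Bank": 0.6, "Ufer": 0.3, "Reihe": 0.1}}
p_e_given_f = {
    "Bank": {"bank": 0.7},
    "Ufer": {"bank": 0.2, "shore": 0.6},
    "Reihe": {"row": 0.9},   # no reverse evidence for "bank"
}

def bidirectional_filter(source, threshold=0.1):
    """Keep a candidate f for source term e only if both p(f|e) and
    p(e|f) clear the threshold, then renormalize the survivors."""
    kept = {}
    for f, p_fe in p_f_given_e.get(source, {}).items():
        p_ef = p_e_given_f.get(f, {}).get(source, 0.0)
        if p_fe >= threshold and p_ef >= threshold:
            kept[f] = p_fe
    total = sum(kept.values())
    return {f: p / total for f, p in kept.items()} if total else {}

print(bidirectional_filter("bank"))
```

Here "Reihe" is pruned because the reverse table offers no evidence that it translates back to "bank"; the surviving candidates are renormalized so the probabilities still sum to one.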
    Source
    Information processing and management. 48(2012) no.4, S.631-653
  9. De Luca, E.W.; Dahlberg, I.: Including knowledge domains from the ICC into the multilingual lexical linked data cloud (2014) 0.01
    0.00792601 = product of:
      0.036988042 = sum of:
        0.017435152 = weight(_text_:web in 1493) [ClassicSimilarity], result of:
          0.017435152 = score(doc=1493,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.18028519 = fieldWeight in 1493, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1493)
        0.010089659 = weight(_text_:information in 1493) [ClassicSimilarity], result of:
          0.010089659 = score(doc=1493,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.19395474 = fieldWeight in 1493, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1493)
        0.009463232 = product of:
          0.028389696 = sum of:
            0.028389696 = weight(_text_:22 in 1493) [ClassicSimilarity], result of:
              0.028389696 = score(doc=1493,freq=4.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.27358043 = fieldWeight in 1493, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1493)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Abstract
Much of the information that is already available on the Web, or retrieved from local information systems and social networks, is structured in data silos that are not semantically related. Semantic technologies demonstrate that typed links which directly express their relations are an advantage for every application that can reuse the knowledge incorporated in the data. For this reason, data integration, through reengineering (e.g. triplify) or querying (e.g. D2R), is an important task in making information available to everyone. Thus, in order to build a semantic map of the data, we need knowledge about the data items themselves and the relations between heterogeneous data items. In this paper, we present our work on providing Lexical Linked Data (LLD) through a meta-model that contains all the resources and offers the possibility to retrieve and navigate them from different perspectives. We combine the existing work on knowledge domains (based on the Information Coding Classification) with the Multilingual Lexical Linked Data Cloud (based on the RDF/OWL EuroWordNet and the related integrated lexical resources MultiWordNet, EuroWordNet, MEMODATA Lexicon, and the Hamburg Metaphor DB).
    Date
    22. 9.2014 19:01:18
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  10. Fluhr, C.: Crosslingual access to photo databases (2012) 0.01
    0.0068696537 = product of:
      0.032058384 = sum of:
        0.0060537956 = weight(_text_:information in 93) [ClassicSimilarity], result of:
          0.0060537956 = score(doc=93,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.116372846 = fieldWeight in 93, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=93)
        0.01797477 = weight(_text_:retrieval in 93) [ClassicSimilarity], result of:
          0.01797477 = score(doc=93,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.20052543 = fieldWeight in 93, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=93)
        0.008029819 = product of:
          0.024089456 = sum of:
            0.024089456 = weight(_text_:22 in 93) [ClassicSimilarity], result of:
              0.024089456 = score(doc=93,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.23214069 = fieldWeight in 93, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=93)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Date
    17. 4.2012 14:25:22
    Source
Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis et al.
  11. Stiller, J.; Gäde, M.; Petras, V.: Multilingual access to digital libraries : the Europeana use case (2013) 0.01
    0.006527507 = product of:
      0.04569255 = sum of:
        0.038629785 = weight(_text_:bibliothek in 902) [ClassicSimilarity], result of:
          0.038629785 = score(doc=902,freq=2.0), product of:
            0.121660605 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.029633347 = queryNorm
            0.31752092 = fieldWeight in 902, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.0546875 = fieldNorm(doc=902)
        0.0070627616 = weight(_text_:information in 902) [ClassicSimilarity], result of:
          0.0070627616 = score(doc=902,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.13576832 = fieldWeight in 902, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=902)
      0.14285715 = coord(2/14)
    
    Abstract
The article summarizes components for multilingual access in digital libraries, focusing on libraries for digital cultural heritage. An analysis of current information systems in the so-called GLAM sector (galleries, libraries, archives, museums) describes deployed solutions for searching and browsing multilingual content and for interacting with it. Europeana, the European digital library for cultural heritage, is highlighted as a case study, and sample interaction scenarios for multilingual search are presented. The challenges of implementing components for multilingual information access, together with recommendations for improved deployment, are presented and discussed.
    Source
    Information - Wissenschaft und Praxis. 64(2013) H.2/3, S.86-95
  12. Huckstorf, A.; Petras, V.: Mind the lexical gap : EuroVoc Building Block of the Semantic Web (2011) 0.01
    0.00639995 = product of:
      0.04479965 = sum of:
        0.036238287 = weight(_text_:web in 2782) [ClassicSimilarity], result of:
          0.036238287 = score(doc=2782,freq=6.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.37471575 = fieldWeight in 2782, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2782)
        0.00856136 = weight(_text_:information in 2782) [ClassicSimilarity], result of:
          0.00856136 = score(doc=2782,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16457605 = fieldWeight in 2782, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2782)
      0.14285715 = coord(2/14)
    
    Abstract
    A conference event of a special kind took place on 18 and 19 November 2010 in Luxembourg. Initiated by the Publications Office of the European Union (http://publications.europa.eu), librarians and information professionals were invited to discuss the future of multilingual controlled vocabularies in information systems and, in particular, their contribution to the Semantic Web. The conference was organized by the EuroVoc team, which maintains the thesaurus of the European Union. The previous EuroVoc conference took place in 2006. In the meantime, EuroVoc has moved to an ontology-based thesaurus management system, has systematically begun to employ Semantic Web technologies for editing and representation, and has started linking itself to other vocabularies. A productive exchange took place with the producers of other European and international vocabularies (e.g. the United Nations or the FAO), as well as with representatives of projects working on automatic indexing (here, in particular, of parliamentary and legal documents) and on interoperability between vocabularies.
    Source
    Information - Wissenschaft und Praxis. 62(2011) H.2/3, S.125-126
  13. Pika, J.; Pika-Biolzi, M.: Multilingual subject access and classification-based browsing through authority control : the experience of the ETH-Bibliothek, Zürich (2015) 0.01
    0.006295258 = product of:
      0.044066805 = sum of:
        0.039021976 = weight(_text_:bibliothek in 2295) [ClassicSimilarity], result of:
          0.039021976 = score(doc=2295,freq=4.0), product of:
            0.121660605 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.029633347 = queryNorm
            0.32074454 = fieldWeight in 2295, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2295)
        0.0050448296 = weight(_text_:information in 2295) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=2295,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 2295, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2295)
      0.14285715 = coord(2/14)
    
    Abstract
    The paper provides an illustration of the benefits of subject authority control in improving multilingual subject access in NEBIS - Netzwerk von Bibliotheken und Informationsstellen in der Schweiz. This example of good practice focuses on some important aspects of classification and indexing. NEBIS subject authorities comprise a classification scheme and a multilingual subject descriptor system. A bibliographic system supported by subject authority control empowers libraries, as it enables them to expand and adjust vocabulary and link subjects to suit their specific audience. Most importantly, it allows the management of different subject vocabularies in numerous languages. In addition, such an enriched subject index creates a re-usable and shareable source of subject statements that has value in the wider context of information exchange. The illustrations and supporting arguments are based on indexing practice, subject authority control and the use of classification in the ETH-Bibliothek, which is the largest library within the NEBIS network.
  14. Zhou, Y. et al.: Analysing entity context in multilingual Wikipedia to support entity-centric retrieval applications (2016) 0.01
    0.006191569 = product of:
      0.04334098 = sum of:
        0.029957948 = weight(_text_:retrieval in 2758) [ClassicSimilarity], result of:
          0.029957948 = score(doc=2758,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.33420905 = fieldWeight in 2758, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.078125 = fieldNorm(doc=2758)
        0.013383033 = product of:
          0.040149096 = sum of:
            0.040149096 = weight(_text_:22 in 2758) [ClassicSimilarity], result of:
              0.040149096 = score(doc=2758,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.38690117 = fieldWeight in 2758, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2758)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
    
    Date
    1. 2.2016 18:25:22
  15. Vassilakaki, E.; Garoufallou, E.; Johnson, F.; Hartley, R.J.: ¬An exploration of users' needs for multilingual information retrieval and access (2015) 0.01
    0.006077555 = product of:
      0.042542882 = sum of:
        0.01712272 = weight(_text_:information in 2394) [ClassicSimilarity], result of:
          0.01712272 = score(doc=2394,freq=16.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.3291521 = fieldWeight in 2394, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2394)
        0.025420163 = weight(_text_:retrieval in 2394) [ClassicSimilarity], result of:
          0.025420163 = score(doc=2394,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.2835858 = fieldWeight in 2394, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2394)
      0.14285715 = coord(2/14)
    
    Abstract
    The need to promote Multilingual Information Retrieval (MLIR) and Access (MLIA) has become evident, now more than ever, given the daily increase in online information produced in languages other than English. This study aims to explore users' information needs when searching for information across languages. Specifically, a questionnaire was employed to shed light on Library and Information Science (LIS) undergraduate students' use of search engines, databases, and digital libraries when searching, as well as their needs for multilingual access. This study contributes to informing the design of MLIR systems by focusing on the reasons and situations under which users would search for and use information in multiple languages.
    Series
    Communications in computer and information science; 544
  16. Luo, M.M.; Nahl, D.: Let's Google : uncertainty and bilingual search (2019) 0.01
    0.005749839 = product of:
      0.04024887 = sum of:
        0.01482871 = weight(_text_:information in 5363) [ClassicSimilarity], result of:
          0.01482871 = score(doc=5363,freq=12.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.2850541 = fieldWeight in 5363, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5363)
        0.025420163 = weight(_text_:retrieval in 5363) [ClassicSimilarity], result of:
          0.025420163 = score(doc=5363,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.2835858 = fieldWeight in 5363, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=5363)
      0.14285715 = coord(2/14)
    
    Abstract
    This study applies Kuhlthau's Information Search Process (ISP) model to understand bilingual users' Internet search experience. We conducted a quasi-field experiment with 30 bilingual searchers; the results suggested that the ISP model was applicable to studying searchers' information retrieval behavior in simple search tasks. However, searchers' emotional responses differed from those predicted by the ISP model for a complex task. By testing searchers using different search strategies, the results suggested that search engines with multilanguage search functions give bilingual searchers an advantage in the Internet's multilingual environment. The findings showed that when searchers used a search engine as a tool for problem solving, they might experience different feelings in each ISP stage than when searching for information for a term paper using a library. The results echo other research findings indicating that information seeking is a multifaceted phenomenon.
    Source
    Journal of the Association for Information Science and Technology. 70(2019) no.9, S.1014-1025
  17. Gupta, P.; Banchs, R.E.; Rosso, P.: Continuous space models for CLIR (2017) 0.01
    0.005670654 = product of:
      0.039694577 = sum of:
        0.00856136 = weight(_text_:information in 3295) [ClassicSimilarity], result of:
          0.00856136 = score(doc=3295,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16457605 = fieldWeight in 3295, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3295)
        0.031133216 = weight(_text_:retrieval in 3295) [ClassicSimilarity], result of:
          0.031133216 = score(doc=3295,freq=6.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.34732026 = fieldWeight in 3295, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=3295)
      0.14285715 = coord(2/14)
    
    Abstract
    We present and evaluate a novel technique for learning cross-lingual continuous space models to aid cross-language information retrieval (CLIR). Our model, referred to as the external-data composition neural network (XCNN), is based on a composition function implemented on top of a deep neural network that provides a distributed learning framework. Unlike most existing models, which rely only on available parallel data for training, our learning framework provides a natural way to exploit monolingual data and its associated relevance metadata for learning continuous space representations of language. Cross-language extensions of the obtained models can then be trained using a small set of parallel data. This property is very helpful for resource-poor languages; therefore, we carry out experiments on the English-Hindi language pair. In the conducted comparative evaluation, the proposed model is shown to outperform state-of-the-art continuous space models by a statistically significant margin on two different tasks: parallel sentence retrieval and ad-hoc retrieval.
    Source
    Information processing and management. 53(2017) no.2, S.359-370
  18. Kim, S.; Ko, Y.; Oard, D.W.: Combining lexical and statistical translation evidence for cross-language information retrieval (2015) 0.01
    0.005361108 = product of:
      0.037527755 = sum of:
        0.012107591 = weight(_text_:information in 1606) [ClassicSimilarity], result of:
          0.012107591 = score(doc=1606,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23274569 = fieldWeight in 1606, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1606)
        0.025420163 = weight(_text_:retrieval in 1606) [ClassicSimilarity], result of:
          0.025420163 = score(doc=1606,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.2835858 = fieldWeight in 1606, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=1606)
      0.14285715 = coord(2/14)
    
    Abstract
    This article explores how best to use lexical and statistical translation evidence together for cross-language information retrieval (CLIR). Lexical translation evidence is assembled from Wikipedia and from a large machine-readable dictionary; statistical translation evidence is drawn from parallel corpora; and evidence from co-occurrence in the document language provides a basis for limiting the adverse effect of translation ambiguity. Coverage statistics for NII Testbeds and Community for Information Access Research (NTCIR) queries confirm that these resources have complementary strengths. Experiments with translation evidence from a small parallel corpus indicate that even rather rough estimates of translation probabilities can yield further improvements over a strong technique for translation weighting based on using Jensen-Shannon divergence as a term-association measure. Finally, a novel approach to post-translation query expansion using a random walk over the Wikipedia concept link graph is shown to yield further improvements over alternative techniques for post-translation query expansion. Evaluation results on the NTCIR-5 English-Korean test collection show statistically significant improvements over strong baselines.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.1, S.23-39
  19. Tsai, M.-.F.; Chen, H.-H.; Wang, Y.-T.: Learning a merge model for multilingual information retrieval (2011) 0.01
    0.005317847 = product of:
      0.037224926 = sum of:
        0.011280581 = weight(_text_:information in 2750) [ClassicSimilarity], result of:
          0.011280581 = score(doc=2750,freq=10.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.21684799 = fieldWeight in 2750, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2750)
        0.025944345 = weight(_text_:retrieval in 2750) [ClassicSimilarity], result of:
          0.025944345 = score(doc=2750,freq=6.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.28943354 = fieldWeight in 2750, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2750)
      0.14285715 = coord(2/14)
    
    Abstract
    This paper proposes a learning approach for the merging process in multilingual information retrieval (MLIR). To support this approach, we present a number of features that may influence the MLIR merging process, extracted mainly at three levels: query, document, and translation. After feature extraction, we use the FRank ranking algorithm to construct a merge model. To the best of our knowledge, this is the first attempt to use a learning-based ranking algorithm to construct a merge model for MLIR merging. In our experiments, three test collections for the cross-lingual information retrieval (CLIR) task in NTCIR3, 4, and 5 are employed to assess the performance of the proposed method. Moreover, several merging methods are also carried out for comparison, including traditional merging methods, the 2-step merging strategy, and a merging method based on logistic regression. The experimental results show that our proposed method can significantly improve merging quality on two different types of datasets. Beyond this effectiveness, the merge model generated by FRank allows our method to identify key factors that influence the merging process, which may provide more insight into and understanding of MLIR merging.
    Source
    Information processing and management. 47(2011) no.5, S.635-646
  20. Hauer, M.: Zur Bedeutung normierter Terminologien in Zeiten moderner Sprach- und Information-Retrieval-Technologien (2013) 0.01
    0.00524566 = product of:
      0.036719617 = sum of:
        0.0070627616 = weight(_text_:information in 995) [ClassicSimilarity], result of:
          0.0070627616 = score(doc=995,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.13576832 = fieldWeight in 995, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=995)
        0.029656855 = weight(_text_:retrieval in 995) [ClassicSimilarity], result of:
          0.029656855 = score(doc=995,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.33085006 = fieldWeight in 995, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=995)
      0.14285715 = coord(2/14)
    
    Abstract
    Like translators, librarians should mediate the dialogue between authors who have already produced works and those, mostly, who are working on new ones. They rely on such a heavily reduced "translation language" that this dialogue often no longer succeeds adequately. For the past ten years, therefore, libraries in the German and American spheres have increasingly been extending the terminology space of their catalogues with the authors' most important original-language technical terms. This creates "docking points" in retrieval for terminological networks, which can be used for query expansion instead of document reduction. The resulting improvement in recall can be refined with respect to precision in dialogue with a modern retrieval system by means of faceting, whereby the librarian's subject terminology, originally often hard to access, can then also be deciphered without unpopular prior training.
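The relevance figure attached to each result above is a Lucene "explain" tree, and the trees are explicitly labeled `[ClassicSimilarity]`, i.e. classic TF-IDF scoring. As a minimal sketch of that arithmetic (constants taken verbatim from the explain tree of result 11, `weight(_text_:bibliothek in 902)`), each term score is queryWeight × fieldWeight, with queryWeight = idf × queryNorm and fieldWeight = sqrt(termFreq) × idf × fieldNorm:

```python
import math

def classic_term_score(freq, idf, query_norm, field_norm):
    """Per-term score under Lucene's ClassicSimilarity:
    score = queryWeight * fieldWeight, where
    queryWeight = idf * queryNorm and
    fieldWeight = tf * idf * fieldNorm, with tf = sqrt(termFreq)."""
    tf = math.sqrt(freq)                 # tf(freq=2.0) = 1.4142135, as in the tree
    query_weight = idf * query_norm      # 4.1055303 * 0.029633347 = 0.121660605
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

# Constants copied from the explain tree of result 11 (doc 902):
score = classic_term_score(freq=2.0, idf=4.1055303,
                           query_norm=0.029633347, field_norm=0.0546875)
print(score)  # ≈ 0.038629785, matching the explain output for _text_:bibliothek
```

The same function reproduces the other `bibliothek` weight in this list (result 13, `freq=4.0`, `fieldNorm=0.0390625`, score 0.039021976), which is a quick way to sanity-check any of the trees above; the outer `product of: … coord(2/14)` wrapper then simply sums the term scores and scales by the fraction of matching query clauses.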

Languages

  • e 32
  • d 7

Types

  • a 36
  • el 2
  • m 2