Search (37 results, page 1 of 2)

  • year_i:[2010 TO 2020}
  • theme_ss:"Multilinguale Probleme"
  1. Franz, G.: Die vielen Wikipedias : Vielsprachigkeit als Zugang zu einer globalisierten Online-Welt (2011) 0.02
    0.01632301 = product of:
      0.05713053 = sum of:
        0.052013997 = weight(_text_:medien in 568) [ClassicSimilarity], result of:
          0.052013997 = score(doc=568,freq=4.0), product of:
            0.17681947 = queryWeight, product of:
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.037568163 = queryNorm
            0.29416442 = fieldWeight in 568, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.03125 = fieldNorm(doc=568)
        0.0051165326 = weight(_text_:information in 568) [ClassicSimilarity], result of:
          0.0051165326 = score(doc=568,freq=2.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.0775819 = fieldWeight in 568, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=568)
      0.2857143 = coord(2/7)
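The explain tree above is Lucene ClassicSimilarity output, and its arithmetic can be checked directly. A minimal sketch, assuming the standard tf-idf definitions (tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm), recomputing the score of hit 1 from the values shown:

```python
import math

def classic_term_weight(freq, idf, query_norm, field_norm):
    """One term's contribution: queryWeight * fieldWeight, where
    queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm."""
    tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

QUERY_NORM = 0.037568163                  # queryNorm from the explain output
FIELD_NORM = 0.03125                      # fieldNorm(doc=568)

medien = classic_term_weight(4.0, 4.7066307, QUERY_NORM, FIELD_NORM)
information = classic_term_weight(2.0, 1.7554779, QUERY_NORM, FIELD_NORM)

# coord(2/7): only 2 of the 7 query terms matched this document.
score = (medien + information) * (2 / 7)
print(round(score, 8))
```

Run against the numbers in the tree, this reproduces 0.052013997 and 0.0051165326 for the two term weights and 0.01632301 for the final score.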
    
    Abstract
    More than ten years have now passed since the founding of Wikipedia. The collaboratively compiled online encyclopedia looks back on an unprecedented success story and has meanwhile taught many classical encyclopedias to fear for their future. And yet there is no such thing as "the" Wikipedia! Instead, the project consists of hundreds of different language versions that operate largely independently of one another. They not only differ in size, they also contain differing content. Articles on one and the same topic can deviate considerably from one Wikipedia to another. Knowledge already compiled by the community is therefore not equally available to all users around the world. With an intensified interlingual knowledge exchange, however, it could be used for the mutual enrichment of the Wikipedias. The book first gives a general overview of Wikipedia, covering its origins, how it works, and the actors involved. The "secret of success" of the reference work and its current challenges are also worked out. The subsequent study shows how strongly Wikipedias of different sizes differ from one another and where, in detail, the differences lie. This is followed by a presentation of the approaches, tools, and difficulties of interlingual knowledge exchange between the language versions. The final part then develops a detailed concept for a novel kind of knowledge exchange, consisting of several interlocking components around the core of a special translation interface. The concept can also serve as a blueprint for localization efforts in multilingual wikis of the sort that internationally operating enterprises increasingly deploy. 
The thesis on which this book is based was awarded the 2011 FHP Prize for the best final thesis in the degree programme "Information und Dokumentation" at the FH Potsdam.
    BK
    05.38 (Neue elektronische Medien)
  2. Freire, N.; Charles, V.; Isaac, A.: Subject information and multilingualism in European bibliographic datasets : experiences with Universal Decimal Classification (2015) 0.01
    0.008546258 = product of:
      0.029911902 = sum of:
        0.012791331 = weight(_text_:information in 2289) [ClassicSimilarity], result of:
          0.012791331 = score(doc=2289,freq=2.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.19395474 = fieldWeight in 2289, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=2289)
        0.01712057 = product of:
          0.051361706 = sum of:
            0.051361706 = weight(_text_:29 in 2289) [ClassicSimilarity], result of:
              0.051361706 = score(doc=2289,freq=2.0), product of:
                0.13215305 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037568163 = queryNorm
                0.38865322 = fieldWeight in 2289, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2289)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Source
    Classification and authority control: expanding resource discovery: proceedings of the International UDC Seminar 2015, 29-30 October 2015, Lisbon, Portugal. Eds.: Slavic, A. u. M.I. Cordeiro
  3. Celli, F. et al.: Enabling multilingual search through controlled vocabularies : the AGRIS approach (2016) 0.01
    0.008502255 = product of:
      0.02975789 = sum of:
        0.012791331 = weight(_text_:information in 3278) [ClassicSimilarity], result of:
          0.012791331 = score(doc=3278,freq=2.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.19395474 = fieldWeight in 3278, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=3278)
        0.016966559 = product of:
          0.050899673 = sum of:
            0.050899673 = weight(_text_:22 in 3278) [ClassicSimilarity], result of:
              0.050899673 = score(doc=3278,freq=2.0), product of:
                0.1315573 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037568163 = queryNorm
                0.38690117 = fieldWeight in 3278, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3278)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Series
    Communications in computer and information science; 672
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  4. Frâncu, V.; Sabo, C.-N.: Implementation of a UDC-based multilingual thesaurus in a library catalogue : the case of BiblioPhil (2010) 0.01
    0.0072941524 = product of:
      0.025529532 = sum of:
        0.015349597 = weight(_text_:information in 3697) [ClassicSimilarity], result of:
          0.015349597 = score(doc=3697,freq=8.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.23274569 = fieldWeight in 3697, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3697)
        0.010179935 = product of:
          0.030539803 = sum of:
            0.030539803 = weight(_text_:22 in 3697) [ClassicSimilarity], result of:
              0.030539803 = score(doc=3697,freq=2.0), product of:
                0.1315573 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037568163 = queryNorm
                0.23214069 = fieldWeight in 3697, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3697)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    In order to enhance the use of Universal Decimal Classification (UDC) numbers in information retrieval, the authors have represented classification with multilingual thesaurus descriptors and implemented this solution in an automated way. The authors illustrate a solution implemented in a BiblioPhil library system. The standard formats used are UNIMARC for subject authority records (i.e. the UDC-based multilingual thesaurus) and MARC XML support for data transfer. The multilingual thesaurus was built according to existing standards, the constituent parts of the classification notations being used as the basis for search terms in the multilingual information retrieval. The verbal equivalents, descriptors and non-descriptors, are used to expand the number of concepts and are given in Romanian, English and French. This approach saves the time of the indexer and provides more user-friendly and easier access to the bibliographic information. The multilingual aspect of the thesaurus enhances information access for a greater number of online users
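The design described above, in which the constituent parts of UDC notations carry verbal equivalents in Romanian, English, and French that expand the search, can be sketched roughly as follows. The mini-thesaurus and descriptor strings below are hypothetical illustrations, not data from BiblioPhil:

```python
# Hypothetical mini-thesaurus: each UDC notation maps to its verbal
# equivalents (descriptors) in Romanian, English, and French.
UDC_THESAURUS = {
    "811.135.1": {"ro": "Limba romana", "en": "Romanian language", "fr": "Langue roumaine"},
    "025.4":     {"ro": "Clasificare", "en": "Classification", "fr": "Classification"},
}

def expand_query(term, lang_order=("ro", "en", "fr")):
    """Expand a search term entered in any language to the UDC notation
    plus all verbal equivalents, so retrieval works across languages."""
    term_l = term.lower()
    for notation, labels in UDC_THESAURUS.items():
        if any(term_l == label.lower() for label in labels.values()):
            return [notation] + [labels[lang] for lang in lang_order]
    return [term]

print(expand_query("Classification"))
# -> ['025.4', 'Clasificare', 'Classification', 'Classification']
```

In the real system the mapping lives in UNIMARC subject authority records rather than an in-memory table; the sketch only shows the expansion step.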
    Date
    22. 7.2010 20:40:56
  5. De Luca, E.W.; Dahlberg, I.: Including knowledge domains from the ICC into the multilingual lexical linked data cloud (2014) 0.01
    0.007082429 = product of:
      0.024788499 = sum of:
        0.012791331 = weight(_text_:information in 1493) [ClassicSimilarity], result of:
          0.012791331 = score(doc=1493,freq=8.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.19395474 = fieldWeight in 1493, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1493)
        0.011997169 = product of:
          0.035991505 = sum of:
            0.035991505 = weight(_text_:22 in 1493) [ClassicSimilarity], result of:
              0.035991505 = score(doc=1493,freq=4.0), product of:
                0.1315573 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037568163 = queryNorm
                0.27358043 = fieldWeight in 1493, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1493)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    A lot of information that is already available on the Web, or retrieved from local information systems and social networks is structured in data silos that are not semantically related. Semantic technologies make it emerge that the use of typed links that directly express their relations are an advantage for every application that can reuse the incorporated knowledge about the data. For this reason, data integration, through reengineering (e.g. triplify), or querying (e.g. D2R) is an important task in order to make information available for everyone. Thus, in order to build a semantic map of the data, we need knowledge about data items itself and the relation between heterogeneous data items. In this paper, we present our work of providing Lexical Linked Data (LLD) through a meta-model that contains all the resources and gives the possibility to retrieve and navigate them from different perspectives. We combine the existing work done on knowledge domains (based on the Information Coding Classification) within the Multilingual Lexical Linked Data Cloud (based on the RDF/OWL EurowordNet and the related integrated lexical resources (MultiWordNet, EuroWordNet, MEMODATA Lexicon, Hamburg Methaphor DB).
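The "typed links" idea the abstract builds on can be illustrated with plain RDF-style triples; the resource names and predicates below are invented for illustration, and the authors' actual meta-model is far richer:

```python
# Minimal sketch: heterogeneous lexical resources expressed as typed links
# (subject, predicate, object), so applications can reuse the relations.
triples = [
    ("llc:dog_en", "rdf:type", "ewn:LexicalEntry"),
    ("llc:dog_en", "ewn:translation", "llc:Hund_de"),
    ("llc:dog_en", "icc:knowledgeDomain", "icc:Zoology"),
    ("llc:Hund_de", "icc:knowledgeDomain", "icc:Zoology"),
]

def related(node, predicate):
    """Navigate the typed links from one perspective (one predicate)."""
    return [o for s, p, o in triples if s == node and p == predicate]

# Cross-resource navigation: from a EuroWordNet-style entry to its
# ICC knowledge domain, and to its German translation.
print(related("llc:dog_en", "icc:knowledgeDomain"))  # -> ['icc:Zoology']
print(related("llc:dog_en", "ewn:translation"))      # -> ['llc:Hund_de']
```

Because every relation is explicitly typed, the same pool of triples can be queried from different perspectives, which is what lets the meta-model join the ICC knowledge domains to the lexical resources.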
    Date
    22. 9.2014 19:01:18
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  6. Tsai, M.-F.; Chen, H.-H.; Wang, Y.-T.: Learning a merge model for multilingual information retrieval (2011) 0.01
    0.0065318365 = product of:
      0.022861427 = sum of:
        0.014301142 = weight(_text_:information in 2750) [ClassicSimilarity], result of:
          0.014301142 = score(doc=2750,freq=10.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.21684799 = fieldWeight in 2750, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2750)
        0.008560285 = product of:
          0.025680853 = sum of:
            0.025680853 = weight(_text_:29 in 2750) [ClassicSimilarity], result of:
              0.025680853 = score(doc=2750,freq=2.0), product of:
                0.13215305 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037568163 = queryNorm
                0.19432661 = fieldWeight in 2750, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2750)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    This paper proposes a learning approach for the merging process in multilingual information retrieval (MLIR). To conduct the learning approach, we present a number of features that may influence the MLIR merging process. These features are mainly extracted from three levels: query, document, and translation. After the feature extraction, we then use the FRank ranking algorithm to construct a merge model. To the best of our knowledge, this practice is the first attempt to use a learning-based ranking algorithm to construct a merge model for MLIR merging. In our experiments, three test collections for the task of crosslingual information retrieval (CLIR) in NTCIR3, 4, and 5 are employed to assess the performance of our proposed method. Moreover, several merging methods are also evaluated for comparison, including traditional merging methods, the 2-step merging strategy, and the merging method based on logistic regression. The experimental results show that our proposed method can significantly improve merging quality on two different types of datasets. In addition to its effectiveness, through the merge model generated by FRank, our method can further identify key factors that influence the merging process. This information might provide more insight into, and understanding of, MLIR merging.
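The merging problem itself is easy to state in code. Below is a minimal sketch of a raw-score merging baseline of the kind such learned models are compared against (the FRank model itself is not reproduced here; the result lists and scores are invented):

```python
def min_max_normalize(results):
    """Map one per-language result list's scores into [0, 1] so that lists
    retrieved against different query translations become comparable."""
    scores = [s for _, s in results]
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [(d, 1.0) for d, _ in results]
    return [(d, (s - lo) / (hi - lo)) for d, s in results]

def merge(*ranked_lists):
    """Baseline merge: normalize each list, pool everything, sort by score."""
    pooled = []
    for results in ranked_lists:
        pooled.extend(min_max_normalize(results))
    return sorted(pooled, key=lambda pair: pair[1], reverse=True)

english = [("e1", 12.0), ("e2", 7.5), ("e3", 3.0)]
chinese = [("c1", 4.1), ("c2", 3.9), ("c3", 0.8)]
print([doc for doc, _ in merge(english, chinese)])
# -> ['e1', 'c1', 'c2', 'e2', 'e3', 'c3']
```

A learned merge model replaces the fixed normalize-and-sort rule with a ranking function trained on query-, document-, and translation-level features.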
    Date
    29. 1.2016 20:34:33
    Source
    Information processing and management. 47(2011) no.5, S.635-646
  7. Huckstorf, A.; Petras, V.: Mind the lexical gap : EuroVoc Building Block of the Semantic Web (2011) 0.01
    0.006036042 = product of:
      0.021126145 = sum of:
        0.010853804 = weight(_text_:information in 2782) [ClassicSimilarity], result of:
          0.010853804 = score(doc=2782,freq=4.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.16457605 = fieldWeight in 2782, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2782)
        0.010272342 = product of:
          0.030817024 = sum of:
            0.030817024 = weight(_text_:29 in 2782) [ClassicSimilarity], result of:
              0.030817024 = score(doc=2782,freq=2.0), product of:
                0.13215305 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037568163 = queryNorm
                0.23319192 = fieldWeight in 2782, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2782)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    A conference event of a special kind took place on 18 and 19 November 2010 in Luxembourg. At the initiative of the Publications Office of the European Union (http://publications.europa.eu), librarians and information professionals were invited to discuss the future of multilingual controlled vocabularies in information systems and, in particular, their contribution to the Semantic Web. The conference was organized by the EuroVoc team, which maintains the thesaurus of the European Union. The previous EuroVoc conference had taken place in 2006. In the meantime, EuroVoc has moved to an ontology-based thesaurus management system and has systematically begun to use Semantic Web technologies for editing and representation and to link up with other vocabularies. There was a productive exchange with the producers of other European and international vocabularies (e.g. the United Nations or the FAO) as well as with representatives of projects working on automatic indexing (here, in particular, of parliamentary and legal documents) and on interoperability between vocabularies.
    Date
    29. 3.2013 17:46:08
    Source
    Information - Wissenschaft und Praxis. 62(2011) H.2/3, S.125-126
  8. Luca, E.W. de; Dahlberg, I.: ¬Die Multilingual Lexical Linked Data Cloud : eine mögliche Zugangsoptimierung? (2014) 0.01
    0.0060096397 = product of:
      0.021033738 = sum of:
        0.010853804 = weight(_text_:information in 1736) [ClassicSimilarity], result of:
          0.010853804 = score(doc=1736,freq=4.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.16457605 = fieldWeight in 1736, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1736)
        0.010179935 = product of:
          0.030539803 = sum of:
            0.030539803 = weight(_text_:22 in 1736) [ClassicSimilarity], result of:
              0.030539803 = score(doc=1736,freq=2.0), product of:
                0.1315573 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037568163 = queryNorm
                0.23214069 = fieldWeight in 1736, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1736)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    A great deal of information is already available on the Web or can be obtained from isolated structured data stores such as information systems and social networks. Data integration, through post-processing or through query mechanisms (e.g. D2R), is therefore important in order to make information generally usable. Semantic technologies enable the use of defined connections (typed links) that capture the relations between data, which benefits every application that can reuse the knowledge contained in the data. To produce a semantic map of the data, we need knowledge about the individual data items and their relations to other data. This contribution presents our work on using Lexical Linked Data (LLD) through a meta-model that contains all the resources and also makes it possible to find them from different perspectives. In this way we connect existing work on knowledge domains (based on the Information Coding Classification) with the Multilingual Lexical Linked Data Cloud (based on the RDF/OWL representation of EuroWordNet and the similarly integrated lexical resources MultiWordNet, MEMODATA, and the Hamburg Metaphor DB).
    Date
    22. 9.2014 19:00:13
    Source
    Information - Wissenschaft und Praxis. 65(2014) H.4/5, S.279-287
  9. Fluhr, C.: Crosslingual access to photo databases (2012) 0.01
    0.0051013525 = product of:
      0.017854733 = sum of:
        0.0076747984 = weight(_text_:information in 93) [ClassicSimilarity], result of:
          0.0076747984 = score(doc=93,freq=2.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.116372846 = fieldWeight in 93, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=93)
        0.010179935 = product of:
          0.030539803 = sum of:
            0.030539803 = weight(_text_:22 in 93) [ClassicSimilarity], result of:
              0.030539803 = score(doc=93,freq=2.0), product of:
                0.1315573 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037568163 = queryNorm
                0.23214069 = fieldWeight in 93, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=93)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Date
    17. 4.2012 14:25:22
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis, u.a
  10. Ménard, E.: Ordinary image retrieval in a multilingual context : a comparison of two indexing vocabularies (2010) 0.00
    0.0044886637 = product of:
      0.015710322 = sum of:
        0.008862095 = weight(_text_:information in 3946) [ClassicSimilarity], result of:
          0.008862095 = score(doc=3946,freq=6.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.1343758 = fieldWeight in 3946, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=3946)
        0.006848227 = product of:
          0.020544682 = sum of:
            0.020544682 = weight(_text_:29 in 3946) [ClassicSimilarity], result of:
              0.020544682 = score(doc=3946,freq=2.0), product of:
                0.13215305 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037568163 = queryNorm
                0.15546128 = fieldWeight in 3946, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3946)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    Purpose - This paper seeks to examine image retrieval within two different contexts: a monolingual context where the language of the query is the same as the indexing language and a multilingual context where the language of the query is different from the indexing language. The study also aims to compare two different approaches for the indexing of ordinary images representing common objects: traditional image indexing with the use of a controlled vocabulary and free image indexing using uncontrolled vocabulary. Design/methodology/approach - This research uses three data collection methods. An analysis of the indexing terms was employed in order to examine the multiplicity of term types assigned to images. A simulation of the retrieval process involving a set of 30 images was performed with 60 participants. The quantification of the retrieval performance of each indexing approach was based on the usability measures, that is, effectiveness, efficiency and satisfaction of the user. Finally, a questionnaire was used to gather information on searcher satisfaction during and after the retrieval process. Findings - The results of this research are twofold. The analysis of indexing terms associated with all the 3,950 images provides a comprehensive description of the characteristics of the four non-combined indexing forms used for the study. Also, the retrieval simulation results offer information about the relative performance of the six indexing forms (combined and non-combined) in terms of their effectiveness, efficiency (temporal and human) and the image searcher's satisfaction. 
Originality/value - The findings of the study suggest that, in the near future, the information systems could benefit from allowing an increased coexistence of controlled vocabularies and uncontrolled vocabularies, resulting from collaborative image tagging, for example, and giving the users the possibility to dynamically participate in the image-indexing process, in a more user-centred way.
    Date
    29. 8.2010 10:51:07
  11. Pika, J.; Pika-Biolzi, M.: Multilingual subject access and classification-based browsing through authority control : the experience of the ETH-Bibliothek, Zürich (2015) 0.00
    0.004273129 = product of:
      0.014955951 = sum of:
        0.0063956655 = weight(_text_:information in 2295) [ClassicSimilarity], result of:
          0.0063956655 = score(doc=2295,freq=2.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.09697737 = fieldWeight in 2295, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2295)
        0.008560285 = product of:
          0.025680853 = sum of:
            0.025680853 = weight(_text_:29 in 2295) [ClassicSimilarity], result of:
              0.025680853 = score(doc=2295,freq=2.0), product of:
                0.13215305 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037568163 = queryNorm
                0.19432661 = fieldWeight in 2295, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2295)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    The paper provides an illustration of the benefits of subject authority control in improving multilingual subject access in NEBIS - Netzwerk von Bibliotheken und Informationsstellen in der Schweiz. This example of good practice focuses on some important aspects of classification and indexing. NEBIS subject authorities comprise a classification scheme and a multilingual subject descriptor system. A bibliographic system supported by subject authority control empowers libraries, as it enables them to expand and adjust vocabulary and link subjects to suit their specific audience. Most importantly, it allows the management of different subject vocabularies in numerous languages. In addition, such an enriched subject index creates a re-usable and shareable source of subject statements that has value in the wider context of information exchange. The illustrations and supporting arguments are based on indexing practice, subject authority control and use of classification in ETH-Bibliothek, which is the largest library within the NEBIS network.
    Source
    Classification and authority control: expanding resource discovery: proceedings of the International UDC Seminar 2015, 29-30 October 2015, Lisbon, Portugal. Eds.: Slavic, A. u. M.I. Cordeiro
  12. Vassilakaki, E.; Garoufallou, E.; Johnson, F.; Hartley, R.J.: An exploration of users' needs for multilingual information retrieval and access (2015) 0.00
    0.003101087 = product of:
      0.021707607 = sum of:
        0.021707607 = weight(_text_:information in 2394) [ClassicSimilarity], result of:
          0.021707607 = score(doc=2394,freq=16.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.3291521 = fieldWeight in 2394, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2394)
      0.14285715 = coord(1/7)
    
    Abstract
    The need for promoting Multilingual Information Retrieval (MLIR) and Access (MLIA) has become evident, now more than ever, given the increase of the online information produced daily in languages other than English. This study aims to explore users' information needs when searching for information across languages. Specifically, the method of questionnaire was employed to shed light on Library and Information Science (LIS) undergraduate students' use of search engines, databases, and digital libraries when searching, as well as their needs for multilingual access. This study contributes to informing the design of MLIR systems by focusing on the reasons and situations under which users would search for and use information in multiple languages.
    Series
    Communications in computer and information science; 544
  13. Peters, C.; Braschler, M.; Clough, P.: Multilingual information retrieval : from research to practice (2012) 0.00
    0.002830892 = product of:
      0.019816244 = sum of:
        0.019816244 = weight(_text_:information in 361) [ClassicSimilarity], result of:
          0.019816244 = score(doc=361,freq=30.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.3004734 = fieldWeight in 361, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=361)
      0.14285715 = coord(1/7)
    
    Abstract
    We are living in a multilingual world, and the diversity in languages which are used to interact with information access systems has generated a wide variety of challenges to be addressed by computer and information scientists. The growing amount of non-English information accessible globally and the increased worldwide exposure of enterprises also necessitate the adaptation of Information Retrieval (IR) methods to new, multilingual settings. Peters, Braschler and Clough present a comprehensive description of the technologies involved in designing and developing systems for Multilingual Information Retrieval (MLIR). They provide readers with broad coverage of the various issues involved in creating systems to make accessible digitally stored materials regardless of the language(s) they are written in. Details on Cross-Language Information Retrieval (CLIR) are also covered that help readers to understand how to develop retrieval systems that cross language boundaries. Their work is divided into six chapters and accompanies the reader step-by-step through the various stages involved in building, using and evaluating MLIR systems. The book concludes with some examples of recent applications that utilise MLIR technologies. Some of the techniques described have recently started to appear in commercial search systems, while others have the potential to be part of future incarnations. The book is intended for graduate students, scholars, and practitioners with a basic understanding of classical text retrieval methods. It offers guidelines and information on all aspects that need to be taken into consideration when building MLIR systems, while avoiding too many 'hands-on details' that could rapidly become obsolete. Thus it bridges the gap between the material covered by most of the classical IR textbooks and the novel requirements related to the acquisition and dissemination of information in whatever language it is stored.
    Content
    Inhalt: 1 Introduction 2 Within-Language Information Retrieval 3 Cross-Language Information Retrieval 4 Interaction and User Interfaces 5 Evaluation for Multilingual Information Retrieval Systems 6 Applications of Multilingual Information Access
    RSWK
    Information-Retrieval-System / Mehrsprachigkeit / Abfrage / Zugriff
    Subject
    Information-Retrieval-System / Mehrsprachigkeit / Abfrage / Zugriff
  14. Flores, F.N.; Moreira, V.P.: Assessing the impact of stemming accuracy on information retrieval : a multilingual perspective (2016) 0.00
    0.0026856202 = product of:
      0.01879934 = sum of:
        0.01879934 = weight(_text_:information in 3187) [ClassicSimilarity], result of:
          0.01879934 = score(doc=3187,freq=12.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.2850541 = fieldWeight in 3187, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3187)
      0.14285715 = coord(1/7)
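The explain trees shown for each hit follow Lucene's classic TF-IDF similarity: score = queryWeight x fieldWeight x coord, with tf(freq) = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf x queryNorm, and fieldWeight = tf x idf x fieldNorm. A minimal Python sketch (assuming exactly this ClassicSimilarity formula) reproduces the figures in the tree above:

```python
import math

def classic_similarity_score(freq, doc_freq, max_docs, field_norm, query_norm, coord):
    """Recompute a Lucene ClassicSimilarity explain tree for a one-term query."""
    tf = math.sqrt(freq)                           # tf(freq) = sqrt(freq)
    idf = 1 + math.log(max_docs / (doc_freq + 1))  # idf = 1 + ln(maxDocs / (docFreq + 1))
    query_weight = idf * query_norm                # queryWeight = idf * queryNorm
    field_weight = tf * idf * field_norm           # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight * coord     # score = queryWeight * fieldWeight * coord

# Values taken from the explain tree for doc 3187 above.
score = classic_similarity_score(
    freq=12.0, doc_freq=20772, max_docs=44218,
    field_norm=0.046875, query_norm=0.037568163, coord=1/7,
)
print(score)  # ≈ 0.0026856, matching the displayed score
```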
    
    Abstract
    The quality of stemming algorithms is typically measured in two different ways: (i) how accurately they map the variant forms of a word to the same stem; or (ii) how much improvement they bring to Information Retrieval systems. In this article, we evaluate various stemming algorithms, in four languages, in terms of accuracy and in terms of their aid to Information Retrieval. The aim is to assess whether the most accurate stemmers are also the ones that bring the biggest gain in Information Retrieval. Experiments in English, French, Portuguese, and Spanish show that this is not always the case, as stemmers with higher error rates yield better retrieval quality. As a byproduct, we also identified the most accurate stemmers and the best for Information Retrieval purposes.
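Sense (i) above, how accurately a stemmer maps the variant forms of a word to the same stem, can be measured by pair-counting over gold groups of variants. A minimal sketch, with a deliberately naive stemmer and toy data standing in for the real algorithms and gold standards the authors evaluate:

```python
from itertools import combinations

def stemming_accuracy(stem, gold_groups):
    """Fraction of same-group word pairs mapped to the same stem.

    `gold_groups` lists sets of variant forms that should share a stem.
    """
    hits = total = 0
    for group in gold_groups:
        for a, b in combinations(sorted(group), 2):
            total += 1
            hits += stem(a) == stem(b)
    return hits / total if total else 1.0

# Toy English data; the suffix-stripper is a hypothetical stand-in for a real stemmer.
naive_stem = lambda w: w.rstrip("s")
groups = [{"connect", "connects", "connected"}, {"retrieval", "retrievals"}]
print(stemming_accuracy(naive_stem, groups))  # 0.5
```

Note that this rate only captures understemming (failing to conflate variants); a fuller evaluation in the spirit of the article would also penalize overstemming, i.e. wrongly merging unrelated words.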
    Source
    Information processing and management. 52(2016) no.5, S.840-854
  15. Luo, M.M.; Nahl, D.: Let's Google : uncertainty and bilingual search (2019) 0.00
    0.0026856202 = product of:
      0.01879934 = sum of:
        0.01879934 = weight(_text_:information in 5363) [ClassicSimilarity], result of:
          0.01879934 = score(doc=5363,freq=12.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.2850541 = fieldWeight in 5363, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5363)
      0.14285715 = coord(1/7)
    
    Abstract
    This study applies Kuhlthau's Information Search Process (ISP) stage model to understand bilingual users' Internet search experience. We conducted a quasi-field experiment with 30 bilingual searchers, and the results suggested that the ISP model was applicable in studying searchers' information retrieval behavior in simple tasks. However, searchers' emotional responses differed from those of the ISP model for a complex task. By testing searchers using different search strategies, the results suggested that search engines with multilanguage search functions provide an advantage for bilingual searchers in the Internet's multilingual environment. The findings showed that when searchers used a search engine as a tool for problem solving, they might experience different feelings in each ISP stage than when searching for information for a term paper using a library. The results echo other research findings that information seeking is a multifaceted phenomenon.
    Source
    Journal of the Association for Information Science and Technology. 70(2019) no.9, S.1014-1025
  16. Wang, J.; Oard, D.W.: Matching meaning for cross-language information retrieval (2012) 0.00
    0.0025582663 = product of:
      0.017907863 = sum of:
        0.017907863 = weight(_text_:information in 7430) [ClassicSimilarity], result of:
          0.017907863 = score(doc=7430,freq=8.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.27153665 = fieldWeight in 7430, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7430)
      0.14285715 = coord(1/7)
    
    Abstract
    This article describes a framework for cross-language information retrieval that efficiently leverages statistical estimation of translation probabilities. The framework provides a unified perspective into which some earlier work on techniques for cross-language information retrieval based on translation probabilities can be cast. Modeling synonymy and filtering translation probabilities using bidirectional evidence are shown to yield a balance between retrieval effectiveness and query-time (or indexing-time) efficiency that seems well suited to large-scale applications. Evaluations with six test collections show consistent improvements over strong baselines.
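Query translation against a probability table is the core mechanism in such frameworks: each source-language query term is mapped to a weighted set of document-language terms. A minimal sketch of this probabilistic structured-query style of translation, where the simple probability threshold is a stand-in for the paper's bidirectional-evidence filtering and the English-German probabilities are hypothetical:

```python
def translate_query(query_terms, trans_probs, threshold=0.1):
    """Map each source-language query term to document-language terms
    weighted by translation probability, dropping low-probability
    alternatives. `trans_probs[src]` is a dict {target: P(target | src)}.
    """
    translated = {}
    for term in query_terms:
        kept = {t: p for t, p in trans_probs.get(term, {}).items() if p >= threshold}
        z = sum(kept.values())
        # Renormalize so each query term contributes unit probability mass.
        translated[term] = {t: p / z for t, p in kept.items()} if z else {}
    return translated

# Hypothetical English->German translation probabilities, for illustration only.
probs = {"bank": {"Bank": 0.6, "Ufer": 0.3, "Reihe": 0.05}}
print(translate_query(["bank"], probs))
```

The retrieval system then scores documents against the weighted translations, so a plausible but low-probability alternative like "Reihe" no longer dilutes the query.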
    Source
    Information processing and management. 48(2012) no.4, S.631-653
  17. Zhou, Y. et al.: Analysing entity context in multilingual Wikipedia to support entity-centric retrieval applications (2016) 0.00
    0.0024237942 = product of:
      0.016966559 = sum of:
        0.016966559 = product of:
          0.050899673 = sum of:
            0.050899673 = weight(_text_:22 in 2758) [ClassicSimilarity], result of:
              0.050899673 = score(doc=2758,freq=2.0), product of:
                0.1315573 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037568163 = queryNorm
                0.38690117 = fieldWeight in 2758, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2758)
          0.33333334 = coord(1/3)
      0.14285715 = coord(1/7)
    
    Date
    1. 2.2016 18:25:22
  18. Kim, S.; Ko, Y.; Oard, D.W.: Combining lexical and statistical translation evidence for cross-language information retrieval (2015) 0.00
    0.0021927997 = product of:
      0.015349597 = sum of:
        0.015349597 = weight(_text_:information in 1606) [ClassicSimilarity], result of:
          0.015349597 = score(doc=1606,freq=8.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.23274569 = fieldWeight in 1606, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1606)
      0.14285715 = coord(1/7)
    
    Abstract
    This article explores how best to use lexical and statistical translation evidence together for cross-language information retrieval (CLIR). Lexical translation evidence is assembled from Wikipedia and from a large machine-readable dictionary, statistical translation evidence is drawn from parallel corpora, and evidence from co-occurrence in the document language provides a basis for limiting the adverse effect of translation ambiguity. Coverage statistics for NII Testbeds and Community for Information Access Research (NTCIR) queries confirm that these resources have complementary strengths. Experiments with translation evidence from a small parallel corpus indicate that even rather rough estimates of translation probabilities can yield further improvements over a strong technique for translation weighting based on using Jensen-Shannon divergence as a term-association measure. Finally, a novel approach to posttranslation query expansion using a random walk over the Wikipedia concept link graph is shown to yield further improvements over alternative techniques for posttranslation query expansion. Evaluation results on the NTCIR-5 English-Korean test collection show statistically significant improvements over strong baselines.
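The Jensen-Shannon divergence used above as a term-association measure is the symmetrized, bounded form of relative entropy between two discrete distributions (here, e.g., translation-probability distributions of candidate terms). A minimal base-2 sketch:

```python
import math

def jensen_shannon(p, q):
    """Jensen-Shannon divergence (base 2) between two discrete distributions
    given as {event: probability} dicts; 0 = identical, 1 = disjoint support.
    """
    events = set(p) | set(q)
    def kl(a, b):  # KL(a || b), restricted to events where a(x) > 0
        return sum(a.get(x, 0.0) * math.log2(a.get(x, 0.0) / b[x])
                   for x in events if a.get(x, 0.0) > 0)
    m = {x: 0.5 * (p.get(x, 0.0) + q.get(x, 0.0)) for x in events}
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Identical distributions diverge by 0; disjoint ones by 1 bit.
print(jensen_shannon({"a": 1.0}, {"a": 1.0}))  # 0.0
print(jensen_shannon({"a": 1.0}, {"b": 1.0}))  # 1.0
```

Because it is symmetric and bounded, it gives a well-behaved association score between term pairs, unlike raw KL divergence.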
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.1, S.23-39
  19. Stiller, J.; Király, P.: Multilinguality of metadata : measuring the multilingual degree of Europeana's metadata (2017) 0.00
    0.0020673913 = product of:
      0.014471739 = sum of:
        0.014471739 = weight(_text_:information in 3558) [ClassicSimilarity], result of:
          0.014471739 = score(doc=3558,freq=4.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.21943474 = fieldWeight in 3558, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=3558)
      0.14285715 = coord(1/7)
    
    Source
    Everything changes, everything stays the same? - Understanding information spaces : Proceedings of the 15th International Symposium of Information Science (ISI 2017), Berlin/Germany, 13th - 15th March 2017. Eds.: M. Gäde, V. Trkulja u. V. Petras
  20. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.00
    0.0020566578 = product of:
      0.014396603 = sum of:
        0.014396603 = product of:
          0.04318981 = sum of:
            0.04318981 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
              0.04318981 = score(doc=1967,freq=4.0), product of:
                0.1315573 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037568163 = queryNorm
                0.32829654 = fieldWeight in 1967, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1967)
          0.33333334 = coord(1/3)
      0.14285715 = coord(1/7)
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.

Languages

  • e 31
  • d 6

Types

  • a 34
  • el 2
  • m 2