Search (46 results, page 1 of 3)

  • × year_i:[2010 TO 2020}
  • × theme_ss:"Multilinguale Probleme"
  1. Fluhr, C.: Crosslingual access to photo databases (2012) 0.09
    0.090467945 = product of:
      0.1507799 = sum of:
        0.008173384 = weight(_text_:a in 93) [ClassicSimilarity], result of:
          0.008173384 = score(doc=93,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 93, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=93)
        0.095440306 = weight(_text_:91 in 93) [ClassicSimilarity], result of:
          0.095440306 = score(doc=93,freq=2.0), product of:
            0.25837386 = queryWeight, product of:
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.046368346 = queryNorm
            0.3693884 = fieldWeight in 93, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.046875 = fieldNorm(doc=93)
        0.04716621 = sum of:
          0.009472587 = weight(_text_:information in 93) [ClassicSimilarity], result of:
            0.009472587 = score(doc=93,freq=2.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.116372846 = fieldWeight in 93, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046875 = fieldNorm(doc=93)
          0.037693623 = weight(_text_:22 in 93) [ClassicSimilarity], result of:
            0.037693623 = score(doc=93,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.23214069 = fieldWeight in 93, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=93)
      0.6 = coord(3/5)
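    The breakdown above is Lucene "explain" output for ClassicSimilarity: each matching term contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = tf x idf x fieldNorm (with tf = sqrt(termFreq)), and the sum over terms is scaled by the coordination factor coord. A minimal Python sketch, reusing the values shown for document 93, reproduces the displayed score:

      import math

      QUERY_NORM = 0.046368346
      COORD = 3 / 5  # 3 of 5 query clauses matched

      def term_score(freq, idf, field_norm):
          tf = math.sqrt(freq)                  # e.g. 2.828427 for freq=8.0
          query_weight = idf * QUERY_NORM       # e.g. 0.053464882 for idf=1.153047
          field_weight = tf * idf * field_norm  # e.g. 0.15287387
          return query_weight * field_weight    # e.g. 0.008173384

      score = COORD * (
          term_score(8.0, 1.153047, 0.046875)     # _text_:a
          + term_score(2.0, 5.5722036, 0.046875)  # _text_:91
          + term_score(2.0, 1.7554779, 0.046875)  # _text_:information
          + term_score(2.0, 3.5018296, 0.046875)  # _text_:22
      )
      print(round(score, 9))  # ~0.090467945, i.e. the 0.09 shown in the result list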
    
    Abstract
    This paper addresses the search of photos in the databases of agencies that sell photos over the Internet. The problem differs considerably both from photo databases managed by librarians and from the corpora generally used for research purposes. The descriptions consist mainly of single words, which is known to be a poor basis for effective search and which aggravates the problem of semantic ambiguity. This semantic ambiguity is crucial for cross-language querying. On the other hand, users are not aware of documentation techniques and generally use very simple queries, yet expect precise answers. The paper reports the experience gained from three years of use (2006-2008) of cross-language access to several of the main international commercial photo databases. The languages used were French, English, and German.
    Date
    17. 4.2012 14:25:22
    Pages
    S.78-91
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis, u.a.
    Type
    a
  2. Celli, F. et al.: Enabling multilingual search through controlled vocabularies : the AGRIS approach (2016) 0.03
    0.0341686 = product of:
      0.0854215 = sum of:
        0.0068111527 = weight(_text_:a in 3278) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=3278,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 3278, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=3278)
        0.07861035 = sum of:
          0.015787644 = weight(_text_:information in 3278) [ClassicSimilarity], result of:
            0.015787644 = score(doc=3278,freq=2.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.19395474 = fieldWeight in 3278, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.078125 = fieldNorm(doc=3278)
          0.06282271 = weight(_text_:22 in 3278) [ClassicSimilarity], result of:
            0.06282271 = score(doc=3278,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.38690117 = fieldWeight in 3278, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=3278)
      0.4 = coord(2/5)
    
    Series
    Communications in computer and information science; 672
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
    Type
    a
  3. De Luca, E.W.; Dahlberg, I.: Including knowledge domains from the ICC into the multilingual lexical linked data cloud (2014) 0.03
    0.026808461 = product of:
      0.067021154 = sum of:
        0.0068111527 = weight(_text_:a in 1493) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=1493,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 1493, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1493)
        0.060210004 = sum of:
          0.015787644 = weight(_text_:information in 1493) [ClassicSimilarity], result of:
            0.015787644 = score(doc=1493,freq=8.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.19395474 = fieldWeight in 1493, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1493)
          0.044422362 = weight(_text_:22 in 1493) [ClassicSimilarity], result of:
            0.044422362 = score(doc=1493,freq=4.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.27358043 = fieldWeight in 1493, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1493)
      0.4 = coord(2/5)
    
    Abstract
    A lot of information that is already available on the Web, or retrieved from local information systems and social networks, is structured in data silos that are not semantically related. Semantic technologies show that the use of typed links, which directly express relations, is an advantage for every application that can reuse the knowledge incorporated in the data. For this reason, data integration, through reengineering (e.g. triplify) or querying (e.g. D2R), is an important task in order to make information available for everyone. Thus, in order to build a semantic map of the data, we need knowledge about the data items themselves and the relations between heterogeneous data items. In this paper, we present our work on providing Lexical Linked Data (LLD) through a meta-model that contains all the resources and makes it possible to retrieve and navigate them from different perspectives. We combine the existing work on knowledge domains (based on the Information Coding Classification) with the Multilingual Lexical Linked Data Cloud (based on the RDF/OWL EuroWordNet and the related integrated lexical resources MultiWordNet, MEMODATA Lexicon, and the Hamburg Metaphor DB).
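    As a rough illustration of the typed links described above (a sketch, not the authors' implementation; the rdflib library is assumed to be available and all URIs and identifiers are invented), a knowledge-domain concept can be linked to a lexical concept with an explicitly typed RDF predicate:

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDFS, SKOS

      EX = Namespace("http://example.org/lld/")

      g = Graph()
      # A knowledge-domain concept, e.g. taken from a classification system ...
      g.add((EX.domain_InformationScience, RDFS.label,
             Literal("Information science", lang="en")))
      # ... linked by a typed relation to a lexical concept from a wordnet-style resource.
      g.add((EX.domain_InformationScience, SKOS.closeMatch, EX.lexicalConcept42))
      g.add((EX.lexicalConcept42, RDFS.label,
             Literal("Informationswissenschaft", lang="de")))

      print(g.serialize(format="turtle"))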
    Date
    22. 9.2014 19:01:18
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
    Type
    a
  4. Frâncu, V.; Sabo, C.-N.: Implementation of a UDC-based multilingual thesaurus in a library catalogue : the case of BiblioPhil (2010) 0.03
    0.026659643 = product of:
      0.06664911 = sum of:
        0.0100103095 = weight(_text_:a in 3697) [ClassicSimilarity], result of:
          0.0100103095 = score(doc=3697,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.18723148 = fieldWeight in 3697, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3697)
        0.0566388 = sum of:
          0.018945174 = weight(_text_:information in 3697) [ClassicSimilarity], result of:
            0.018945174 = score(doc=3697,freq=8.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.23274569 = fieldWeight in 3697, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046875 = fieldNorm(doc=3697)
          0.037693623 = weight(_text_:22 in 3697) [ClassicSimilarity], result of:
            0.037693623 = score(doc=3697,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.23214069 = fieldWeight in 3697, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3697)
      0.4 = coord(2/5)
    
    Abstract
    In order to enhance the use of Universal Decimal Classification (UDC) numbers in information retrieval, the authors have represented the classification with multilingual thesaurus descriptors and implemented this solution in an automated way, as illustrated in the BiblioPhil library system. The standard formats used are UNIMARC for subject authority records (i.e. the UDC-based multilingual thesaurus) and MARC XML for data transfer. The multilingual thesaurus was built according to existing standards, the constituent parts of the classification notations being used as the basis for search terms in multilingual information retrieval. The verbal equivalents, descriptors and non-descriptors, are used to expand the number of concepts and are given in Romanian, English and French. This approach saves the indexer's time and provides more user-friendly and easier access to the bibliographic information. The multilingual aspect of the thesaurus enhances information access for a greater number of online users.
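    A toy Python sketch of the basic mechanism (not the BiblioPhil implementation; the UDC numbers and verbal equivalents are only illustrative): constituent parts of a UDC notation are mapped to descriptors in Romanian, English and French, which can then be added as search terms:

      # Illustrative mapping from UDC notation parts to multilingual descriptors.
      UDC_THESAURUS = {
          "811.135.1": {"ro": "Limba română", "en": "Romanian language", "fr": "Langue roumaine"},
          "027.7": {"ro": "Biblioteci universitare", "en": "University libraries",
                    "fr": "Bibliothèques universitaires"},
      }

      def expand_query(notation):
          """Return the notation plus its verbal equivalents in all languages."""
          terms = UDC_THESAURUS.get(notation, {})
          return [notation, *terms.values()]

      print(expand_query("811.135.1"))
      # ['811.135.1', 'Limba română', 'Romanian language', 'Langue roumaine']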
    Date
    22. 7.2010 20:40:56
    Type
    a
  5. Luca, E.W. de; Dahlberg, I.: Die Multilingual Lexical Linked Data Cloud : eine mögliche Zugangsoptimierung? (2014) 0.02
    0.02207063 = product of:
      0.055176575 = sum of:
        0.004086692 = weight(_text_:a in 1736) [ClassicSimilarity], result of:
          0.004086692 = score(doc=1736,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.07643694 = fieldWeight in 1736, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=1736)
        0.051089883 = sum of:
          0.013396261 = weight(_text_:information in 1736) [ClassicSimilarity], result of:
            0.013396261 = score(doc=1736,freq=4.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.16457605 = fieldWeight in 1736, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046875 = fieldNorm(doc=1736)
          0.037693623 = weight(_text_:22 in 1736) [ClassicSimilarity], result of:
            0.037693623 = score(doc=1736,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.23214069 = fieldWeight in 1736, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1736)
      0.4 = coord(2/5)
    
    Abstract
    A great deal of information is already available on the Web or can be obtained from isolated structured data stores such as information systems and social networks. Data integration, through reengineering or through query mechanisms (e.g. D2R), is therefore important in order to make information generally usable. Semantic technologies enable the use of defined connections (typed links) that record how data items relate to one another, which benefits every application able to reuse the knowledge contained in the data. To build a semantic map of the data, we need knowledge about the individual data items and their relations to other data. This contribution presents our work on using Lexical Linked Data (LLD) through a meta-model that contains all the resources and also makes it possible to retrieve and navigate them from different perspectives. We thereby connect existing work on knowledge domains (based on the Information Coding Classification) with the Multilingual Lexical Linked Data Cloud (based on the RDF/OWL representation of EuroWordNet and the related integrated lexical resources MultiWordNet, MEMODATA and the Hamburg Metaphor DB).
    Date
    22. 9.2014 19:00:13
    Source
    Information - Wissenschaft und Praxis. 65(2014) H.4/5, S.279-287
    Type
    a
  6. Zhou, Y. et al.: Analysing entity context in multilingual Wikipedia to support entity-centric retrieval applications (2016) 0.02
    0.015289003 = product of:
      0.038222507 = sum of:
        0.0068111527 = weight(_text_:a in 2758) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=2758,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 2758, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=2758)
        0.031411353 = product of:
          0.06282271 = sum of:
            0.06282271 = weight(_text_:22 in 2758) [ClassicSimilarity], result of:
              0.06282271 = score(doc=2758,freq=2.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.38690117 = fieldWeight in 2758, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2758)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    1. 2.2016 18:25:22
    Type
    a
  7. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.01
    0.012973151 = product of:
      0.032432877 = sum of:
        0.005779455 = weight(_text_:a in 1967) [ClassicSimilarity], result of:
          0.005779455 = score(doc=1967,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10809815 = fieldWeight in 1967, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=1967)
        0.02665342 = product of:
          0.05330684 = sum of:
            0.05330684 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
              0.05330684 = score(doc=1967,freq=4.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.32829654 = fieldWeight in 1967, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1967)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and /or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
    Type
    a
  8. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2014) 0.01
    0.011243925 = product of:
      0.028109813 = sum of:
        0.005898632 = weight(_text_:a in 1962) [ClassicSimilarity], result of:
          0.005898632 = score(doc=1962,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.11032722 = fieldWeight in 1962, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1962)
        0.022211181 = product of:
          0.044422362 = sum of:
            0.044422362 = weight(_text_:22 in 1962) [ClassicSimilarity], result of:
              0.044422362 = score(doc=1962,freq=4.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.27358043 = fieldWeight in 1962, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1962)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This article reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The article discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the Dewey Decimal Classification [DDC] (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
    Footnote
    Contribution in a special issue "Beyond libraries: Subject metadata in the digital environment and Semantic Web". Contains contributions from the IFLA Satellite Post-Conference of the same name, 17-18 August 2012, Tallinn.
    Type
    a
  9. Ménard, E.; Khashman, N.; Kochkina, S.; Torres-Moreno, J.-M.; Velazquez-Morales, P.; Zhou, F.; Jourlin, P.; Rawat, P.; Peinl, P.; Linhares Pontes, E.; Brunetti, I.: A second life for TIIARA : from bilingual to multilingual! (2016) 0.01
    0.009328311 = product of:
      0.023320777 = sum of:
        0.0076151006 = weight(_text_:a in 2834) [ClassicSimilarity], result of:
          0.0076151006 = score(doc=2834,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.14243183 = fieldWeight in 2834, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2834)
        0.015705677 = product of:
          0.031411353 = sum of:
            0.031411353 = weight(_text_:22 in 2834) [ClassicSimilarity], result of:
              0.031411353 = score(doc=2834,freq=2.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.19345059 = fieldWeight in 2834, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2834)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Multilingual controlled vocabularies are rare and often very limited in the choice of languages offered. TIIARA (Taxonomy for Image Indexing and RetrievAl) is a bilingual taxonomy developed for image indexing and retrieval. This controlled vocabulary offers indexers and image searchers innovative and coherent access points for ordinary images. For its initial development, TIIARA included only two languages, French and English. As a logical follow-up, TIIARA was translated into eight additional languages (Arabic, Spanish, Brazilian Portuguese, Mandarin Chinese, Italian, German, Hindi and Russian) in order to increase its international scope. This paper briefly describes the stages of development of the bilingual structure, the processes used in the translations, and the main difficulties encountered by the translators. Adding more languages to TIIARA constitutes an added value for a controlled vocabulary meant to be used by image searchers, who are often limited by their lack of knowledge of multiple languages.
    Source
    Knowledge organization. 43(2016) no.1, S.22-34
    Type
    a
  10. Luo, M.M.; Nahl, D.: Let's Google : uncertainty and bilingual search (2019) 0.01
    0.009264166 = product of:
      0.023160413 = sum of:
        0.01155891 = weight(_text_:a in 5363) [ClassicSimilarity], result of:
          0.01155891 = score(doc=5363,freq=16.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.2161963 = fieldWeight in 5363, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=5363)
        0.011601503 = product of:
          0.023203006 = sum of:
            0.023203006 = weight(_text_:information in 5363) [ClassicSimilarity], result of:
              0.023203006 = score(doc=5363,freq=12.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.2850541 = fieldWeight in 5363, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5363)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This study applies Kuhlthau's Information Search Process (ISP) model to understand bilingual users' Internet search experience. We conducted a quasi-field experiment with 30 bilingual searchers, and the results suggested that the ISP model was applicable to searchers' information retrieval behavior in simple search tasks; for a complex task, however, searchers' emotional responses differed from those predicted by the ISP model. By testing searchers using different search strategies, the results suggested that search engines with multilanguage search functions give bilingual searchers an advantage in the Internet's multilingual environment. The findings showed that when searchers used a search engine as a tool for problem solving, they might experience different feelings in each ISP stage than when searching for information for a term paper using a library. The results echo other research findings indicating that information seeking is a multifaceted phenomenon.
    Source
    Journal of the Association for Information Science and Technology. 70(2019) no.9, S.1014-1025
    Type
    a
  11. Rettinger, A.; Schumilin, A.; Thoma, S.; Ell, B.: Learning a cross-lingual semantic representation of relations expressed in text (2015) 0.01
    0.008606452 = product of:
      0.021516128 = sum of:
        0.013622305 = weight(_text_:a in 2027) [ClassicSimilarity], result of:
          0.013622305 = score(doc=2027,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.25478977 = fieldWeight in 2027, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=2027)
        0.007893822 = product of:
          0.015787644 = sum of:
            0.015787644 = weight(_text_:information in 2027) [ClassicSimilarity], result of:
              0.015787644 = score(doc=2027,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.19395474 = fieldWeight in 2027, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2027)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Series
    Information Systems and Applications, incl. Internet/Web, and HCI; Bd. 9088
    Type
    a
  12. Kim, S.; Ko, Y.; Oard, D.W.: Combining lexical and statistical translation evidence for cross-language information retrieval (2015) 0.01
    0.008412599 = product of:
      0.021031497 = sum of:
        0.01155891 = weight(_text_:a in 1606) [ClassicSimilarity], result of:
          0.01155891 = score(doc=1606,freq=16.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.2161963 = fieldWeight in 1606, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=1606)
        0.009472587 = product of:
          0.018945174 = sum of:
            0.018945174 = weight(_text_:information in 1606) [ClassicSimilarity], result of:
              0.018945174 = score(doc=1606,freq=8.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.23274569 = fieldWeight in 1606, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1606)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This article explores how best to use lexical and statistical translation evidence together for cross-language information retrieval (CLIR). Lexical translation evidence is assembled from Wikipedia and from a large machine-readable dictionary, statistical translation evidence is drawn from parallel corpora, and evidence from co-occurrence in the document language provides a basis for limiting the adverse effect of translation ambiguity. Coverage statistics for NII Testbeds and Community for Information Access Research (NTCIR) queries confirm that these resources have complementary strengths. Experiments with translation evidence from a small parallel corpus indicate that even rather rough estimates of translation probabilities can yield further improvements over a strong technique for translation weighting based on Jensen-Shannon divergence as a term-association measure. Finally, a novel approach to posttranslation query expansion using a random walk over the Wikipedia concept link graph is shown to yield further improvements over alternative expansion techniques. Evaluation results on the NTCIR-5 English-Korean test collection show statistically significant improvements over strong baselines.
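    For reference, a minimal Python sketch of the Jensen-Shannon divergence that the abstract mentions as a term-association measure (the two distributions below are invented; the full translation-weighting pipeline is of course more involved):

      import math

      def kl(p, q):
          return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

      def jensen_shannon(p, q):
          # Symmetric, bounded divergence between two probability distributions.
          m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
          return 0.5 * kl(p, m) + 0.5 * kl(q, m)

      # Hypothetical co-occurrence profiles of a query term and a candidate translation.
      p = [0.5, 0.3, 0.2]
      q = [0.4, 0.4, 0.2]
      print(jensen_shannon(p, q))  # smaller value = stronger association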
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.1, S.23-39
    Type
    a
  13. Wang, J.; Oard, D.W.: Matching meaning for cross-language information retrieval (2012) 0.01
    0.008234787 = product of:
      0.020586967 = sum of:
        0.009535614 = weight(_text_:a in 7430) [ClassicSimilarity], result of:
          0.009535614 = score(doc=7430,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.17835285 = fieldWeight in 7430, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7430)
        0.011051352 = product of:
          0.022102704 = sum of:
            0.022102704 = weight(_text_:information in 7430) [ClassicSimilarity], result of:
              0.022102704 = score(doc=7430,freq=8.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.27153665 = fieldWeight in 7430, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7430)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This article describes a framework for cross-language information retrieval that efficiently leverages statistical estimation of translation probabilities. The framework provides a unified perspective into which some earlier work on techniques for cross-language information retrieval based on translation probabilities can be cast. Modeling synonymy and filtering translation probabilities using bidirectional evidence are shown to yield a balance between retrieval effectiveness and query-time (or indexing-time) efficiency that seems well suited to large-scale applications. Evaluations with six test collections show consistent improvements over strong baselines.
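    A simplified Python sketch of filtering with bidirectional evidence (not the authors' code; the threshold and the renormalization scheme are assumptions): a translation pair is kept only if its probability clears a threshold in both directions, and the surviving probabilities are renormalized per source term:

      def filter_bidirectional(p_e_given_f, p_f_given_e, threshold=0.1):
          # Keep (f, e) only when both p(e|f) and p(f|e) reach the threshold.
          kept = {
              (f, e): p
              for (f, e), p in p_e_given_f.items()
              if p >= threshold and p_f_given_e.get((e, f), 0.0) >= threshold
          }
          # Renormalize the surviving probabilities per source term f.
          totals = {}
          for (f, _), p in kept.items():
              totals[f] = totals.get(f, 0.0) + p
          return {(f, e): p / totals[f] for (f, e), p in kept.items()}

      p_e_given_f = {("maison", "house"): 0.6, ("maison", "home"): 0.3, ("maison", "household"): 0.05}
      p_f_given_e = {("house", "maison"): 0.7, ("home", "maison"): 0.4, ("household", "maison"): 0.02}
      print(filter_bidirectional(p_e_given_f, p_f_given_e))
      # {('maison', 'house'): 0.666..., ('maison', 'home'): 0.333...}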
    Source
    Information processing and management. 48(2012) no.4, S.631-653
    Type
    a
  14. Flores, F.N.; Moreira, V.P.: Assessing the impact of stemming accuracy on information retrieval : a multilingual perspective (2016) 0.01
    0.007909955 = product of:
      0.019774888 = sum of:
        0.008173384 = weight(_text_:a in 3187) [ClassicSimilarity], result of:
          0.008173384 = score(doc=3187,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 3187, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3187)
        0.011601503 = product of:
          0.023203006 = sum of:
            0.023203006 = weight(_text_:information in 3187) [ClassicSimilarity], result of:
              0.023203006 = score(doc=3187,freq=12.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.2850541 = fieldWeight in 3187, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3187)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The quality of stemming algorithms is typically measured in two different ways: (i) how accurately they map the variant forms of a word to the same stem; or (ii) how much improvement they bring to Information Retrieval systems. In this article, we evaluate various stemming algorithms, in four languages, in terms of accuracy and in terms of their aid to Information Retrieval. The aim is to assess whether the most accurate stemmers are also the ones that bring the biggest gain in Information Retrieval. Experiments in English, French, Portuguese, and Spanish show that this is not always the case, as stemmers with higher error rates yield better retrieval quality. As a byproduct, we also identified the most accurate stemmers and the best for Information Retrieval purposes.
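    A small Python sketch of how such a comparison can be set up (assuming NLTK's Snowball stemmers as stand-ins for the algorithms evaluated; the word lists are merely illustrative):

      from nltk.stem.snowball import SnowballStemmer

      SAMPLES = {
          "english": ["connection", "connected", "connecting"],
          "french": ["nationale", "nationaux"],
          "portuguese": ["livros", "livraria"],
          "spanish": ["bibliotecas", "bibliotecario"],
      }

      for language, words in SAMPLES.items():
          stemmer = SnowballStemmer(language)
          # Accurate stemmers map variant forms of a word to the same stem.
          print(language, [stemmer.stem(word) for word in words])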
    Source
    Information processing and management. 52(2016) no.5, S.840-854
    Type
    a
  15. Freire, N.; Charles, V.; Isaac, A.: Subject information and multilingualism in European bibliographic datasets : experiences with Universal Decimal Classification (2015) 0.01
    0.007876435 = product of:
      0.019691087 = sum of:
        0.011797264 = weight(_text_:a in 2289) [ClassicSimilarity], result of:
          0.011797264 = score(doc=2289,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.22065444 = fieldWeight in 2289, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=2289)
        0.007893822 = product of:
          0.015787644 = sum of:
            0.015787644 = weight(_text_:information in 2289) [ClassicSimilarity], result of:
              0.015787644 = score(doc=2289,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.19395474 = fieldWeight in 2289, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2289)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Source
    Classification and authority control: expanding resource discovery: proceedings of the International UDC Seminar 2015, 29-30 October 2015, Lisbon, Portugal. Eds.: Slavic, A. u. M.I. Cordeiro
    Type
    a
  16. Hubrich, J.: Multilinguale Wissensorganisation im Zeitalter der Globalisierung : das Projekt CrissCross (2010) 0.01
    0.0076445015 = product of:
      0.019111253 = sum of:
        0.0034055763 = weight(_text_:a in 4793) [ClassicSimilarity], result of:
          0.0034055763 = score(doc=4793,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.06369744 = fieldWeight in 4793, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4793)
        0.015705677 = product of:
          0.031411353 = sum of:
            0.031411353 = weight(_text_:22 in 4793) [ClassicSimilarity], result of:
              0.031411353 = score(doc=4793,freq=2.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.19345059 = fieldWeight in 4793, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4793)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P.Ohly
    Type
    a
  17. Tsai, M.-F.; Chen, H.-H.; Wang, Y.-T.: Learning a merge model for multilingual information retrieval (2011) 0.01
    0.0073831948 = product of:
      0.018457986 = sum of:
        0.009632425 = weight(_text_:a in 2750) [ClassicSimilarity], result of:
          0.009632425 = score(doc=2750,freq=16.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.18016359 = fieldWeight in 2750, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2750)
        0.008825562 = product of:
          0.017651124 = sum of:
            0.017651124 = weight(_text_:information in 2750) [ClassicSimilarity], result of:
              0.017651124 = score(doc=2750,freq=10.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.21684799 = fieldWeight in 2750, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2750)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This paper proposes a learning approach for the merging process in multilingual information retrieval (MLIR). To support this approach, we present a number of features that may influence the MLIR merging process, extracted mainly at three levels: query, document, and translation. After feature extraction, we use the FRank ranking algorithm to construct a merge model. To the best of our knowledge, this is the first attempt to use a learning-based ranking algorithm to construct a merge model for MLIR merging. In our experiments, three test collections for the cross-lingual information retrieval (CLIR) task in NTCIR-3, 4, and 5 are employed to assess the performance of the proposed method. Several merging methods are also carried out for comparison, including traditional merging methods, the 2-step merging strategy, and a merging method based on logistic regression. The experimental results show that the proposed method can significantly improve merging quality on two different types of datasets. Beyond its effectiveness, the merge model generated by FRank allows our method to identify the key factors that influence the merging process, providing more insight into and understanding of MLIR merging.
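    As an illustration of the "traditional merging methods" that serve as baselines here (a sketch, not the learned FRank merge model; the result lists are invented): per-language scores are min-max normalized and the lists are then merged by normalized score:

      def normalize(run):
          # Min-max normalize the scores of one per-language result list.
          scores = [score for _, score in run]
          lo, hi = min(scores), max(scores)
          return [(doc, (score - lo) / (hi - lo) if hi > lo else 0.0) for doc, score in run]

      def merge(runs):
          pooled = [pair for run in runs for pair in normalize(run)]
          return sorted(pooled, key=lambda pair: pair[1], reverse=True)

      english_run = [("en-doc-1", 12.3), ("en-doc-2", 8.1), ("en-doc-3", 2.0)]
      chinese_run = [("zh-doc-7", 3.4), ("zh-doc-9", 2.2)]
      print(merge([english_run, chinese_run]))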
    Source
    Information processing and management. 47(2011) no.5, S.635-646
    Type
    a
  18. Peters, C.; Braschler, M.; Clough, P.: Multilingual information retrieval : from research to practice (2012) 0.01
    0.007071192 = product of:
      0.01767798 = sum of:
        0.005448922 = weight(_text_:a in 361) [ClassicSimilarity], result of:
          0.005448922 = score(doc=361,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10191591 = fieldWeight in 361, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=361)
        0.012229058 = product of:
          0.024458116 = sum of:
            0.024458116 = weight(_text_:information in 361) [ClassicSimilarity], result of:
              0.024458116 = score(doc=361,freq=30.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.3004734 = fieldWeight in 361, product of:
                  5.477226 = tf(freq=30.0), with freq of:
                    30.0 = termFreq=30.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=361)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    We are living in a multilingual world, and the diversity of languages used to interact with information access systems has generated a wide variety of challenges to be addressed by computer and information scientists. The growing amount of non-English information accessible globally and the increased worldwide exposure of enterprises also necessitate the adaptation of Information Retrieval (IR) methods to new, multilingual settings.

    Peters, Braschler and Clough present a comprehensive description of the technologies involved in designing and developing systems for Multilingual Information Retrieval (MLIR). They provide readers with broad coverage of the various issues involved in creating systems to make digitally stored materials accessible regardless of the language(s) they are written in. Details on Cross-Language Information Retrieval (CLIR) are also covered, helping readers to understand how to develop retrieval systems that cross language boundaries. Their work is divided into six chapters and accompanies the reader step by step through the various stages involved in building, using and evaluating MLIR systems. The book concludes with some examples of recent applications that utilise MLIR technologies. Some of the techniques described have recently started to appear in commercial search systems, while others have the potential to be part of future incarnations.

    The book is intended for graduate students, scholars, and practitioners with a basic understanding of classical text retrieval methods. It offers guidelines and information on all aspects that need to be taken into consideration when building MLIR systems, while avoiding too many 'hands-on details' that could rapidly become obsolete. Thus it bridges the gap between the material covered by most of the classical IR textbooks and the novel requirements related to the acquisition and dissemination of information in whatever language it is stored.
    Content
    Inhalt: 1 Introduction 2 Within-Language Information Retrieval 3 Cross-Language Information Retrieval 4 Interaction and User Interfaces 5 Evaluation for Multilingual Information Retrieval Systems 6 Applications of Multilingual Information Access
    RSWK
    Information-Retrieval-System / Mehrsprachigkeit / Abfrage / Zugriff
    Subject
    Information-Retrieval-System / Mehrsprachigkeit / Abfrage / Zugriff
  19. Yu, L.-C.; Wu, C.-H.; Chang, R.-Y.; Liu, C.-H.; Hovy, E.H.: Annotation and verification of sense pools in OntoNotes (2010) 0.01
    0.0070422525 = product of:
      0.01760563 = sum of:
        0.010769378 = weight(_text_:a in 4236) [ClassicSimilarity], result of:
          0.010769378 = score(doc=4236,freq=20.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.20142901 = fieldWeight in 4236, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4236)
        0.006836252 = product of:
          0.013672504 = sum of:
            0.013672504 = weight(_text_:information in 4236) [ClassicSimilarity], result of:
              0.013672504 = score(doc=4236,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16796975 = fieldWeight in 4236, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4236)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The paper describes OntoNotes, a multilingual (English, Chinese and Arabic) corpus with large-scale semantic annotations, including predicate-argument structure, word senses, ontology linking, and coreference. The underlying semantic model of OntoNotes involves word senses that are grouped into so-called sense pools, i.e., sets of near-synonymous senses of words. Such information is useful for many applications, including query expansion for information retrieval (IR) systems, (near-)duplicate detection for text summarization systems, and alternative word selection for writing support systems. Although a sense pool provides a set of near-synonymous senses of words, it is still unknown whether two words in a pool are interchangeable in practical use. Therefore, this paper devises an unsupervised algorithm that incorporates Google n-grams and a statistical test to determine whether a word in a pool can be substituted by other words in the same pool. The n-gram features are used to measure the degree of context mismatch for a substitution. The statistical test is then applied to determine whether the substitution is adequate based on the degree of mismatch. The proposed method is compared with a supervised method, namely Linear Discriminant Analysis (LDA). Experimental results show that the proposed unsupervised method can achieve comparable performance with the supervised method.
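    A rough Python sketch of the context-mismatch idea (a toy bigram table stands in for the Google n-grams, and a simple ratio replaces the paper's statistical test):

      # Toy bigram counts; in the paper these come from the Google n-grams.
      NGRAM_COUNTS = {
          ("strong", "coffee"): 900,
          ("strong", "argument"): 400, ("powerful", "argument"): 380,
      }

      def context_mismatch(original, substitute, contexts):
          # Fraction of contexts attested for the original word but not for the substitute.
          misses = sum(
              1 for c in contexts
              if NGRAM_COUNTS.get((original, c), 0) > 0 and NGRAM_COUNTS.get((substitute, c), 0) == 0
          )
          return misses / len(contexts)

      print(context_mismatch("strong", "powerful", ["coffee", "argument"]))  # 0.5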
    Source
    Information processing and management. 46(2010) no.4, S.436-447
    Type
    a
  20. Gupta, P.; Banchs, R.E.; Rosso, P.: Continuous space models for CLIR (2017) 0.01
    0.007004201 = product of:
      0.017510502 = sum of:
        0.010812371 = weight(_text_:a in 3295) [ClassicSimilarity], result of:
          0.010812371 = score(doc=3295,freq=14.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.20223314 = fieldWeight in 3295, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3295)
        0.0066981306 = product of:
          0.013396261 = sum of:
            0.013396261 = weight(_text_:information in 3295) [ClassicSimilarity], result of:
              0.013396261 = score(doc=3295,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16457605 = fieldWeight in 3295, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3295)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    We present and evaluate a novel technique for learning cross-lingual continuous space models to aid cross-language information retrieval (CLIR). Our model, referred to as the external-data composition neural network (XCNN), is based on a composition function implemented on top of a deep neural network that provides a distributed learning framework. Unlike most existing models, which rely only on available parallel data for training, our learning framework provides a natural way to exploit monolingual data and its associated relevance metadata for learning continuous space representations of language. Cross-language extensions of the obtained models can then be trained using a small set of parallel data. This property is very helpful for resource-poor languages; we therefore carry out experiments on the English-Hindi language pair. In the conducted comparative evaluation, the proposed model is shown to outperform state-of-the-art continuous space models by a statistically significant margin on two different tasks: parallel sentence retrieval and ad hoc retrieval.
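    To make the retrieval step concrete (a sketch under the assumption that queries and documents have already been embedded into the shared cross-lingual space; the vectors are invented), documents can simply be ranked by cosine similarity to the query vector:

      import math

      def cosine(u, v):
          dot = sum(a * b for a, b in zip(u, v))
          norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
          return dot / norm if norm else 0.0

      english_query = [0.2, 0.7, 0.1]
      hindi_documents = {"doc-1": [0.25, 0.65, 0.05], "doc-2": [0.9, 0.1, 0.3]}
      ranked = sorted(hindi_documents,
                      key=lambda d: cosine(english_query, hindi_documents[d]),
                      reverse=True)
      print(ranked)  # ['doc-1', 'doc-2']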
    Source
    Information processing and management. 53(2017) no.2, S.359-370
    Type
    a

Languages

  • e 35
  • d 11

Types

  • a 43
  • el 5
  • m 2