Search (2258 results, page 1 of 113)

  • language_ss:"e"
  1. Seo, H.-C.; Kim, S.-B.; Rim, H.-C.; Myaeng, S.-H.: Improving query translation in English-Korean Cross-language information retrieval (2005) 0.18
    0.18020225 = product of:
      0.3604045 = sum of:
        0.3604045 = sum of:
          0.31828925 = weight(_text_:translations in 1023) [ClassicSimilarity], result of:
            0.31828925 = score(doc=1023,freq=6.0), product of:
              0.3789649 = queryWeight, product of:
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.051807534 = queryNorm
              0.8398911 = fieldWeight in 1023, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.046875 = fieldNorm(doc=1023)
          0.042115234 = weight(_text_:22 in 1023) [ClassicSimilarity], result of:
            0.042115234 = score(doc=1023,freq=2.0), product of:
              0.18142116 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051807534 = queryNorm
              0.23214069 = fieldWeight in 1023, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1023)
      0.5 = coord(1/2)
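    The explain tree above is standard Lucene ClassicSimilarity output; its partial weight for the term "translations" can be reproduced from the tf-idf formula, using the constants printed in the tree (a minimal sketch, not Lucene itself):

    ```python
    import math

    # Constants copied from the explain tree for "translations" in doc 1023.
    max_docs, doc_freq = 44218, 79
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # 7.314861
    query_norm = 0.051807534
    tf = math.sqrt(6.0)                               # 2.4494898 (termFreq = 6)
    field_norm = 0.046875

    query_weight = idf * query_norm                   # 0.3789649
    field_weight = tf * idf * field_norm              # 0.8398911
    score = query_weight * field_weight               # ~0.31828925
    print(round(score, 8))
    ```

    The same arithmetic, with the other term's constants, yields the 0.042115234 weight for "22"; the final 0.18020225 is the sum of both scaled by coord(1/2).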
    
    Abstract
    Query translation is a viable method for cross-language information retrieval (CLIR), but it suffers from translation ambiguities caused by multiple translations of individual query terms. Previous research has employed various methods for disambiguation, including the method of selecting an individual target query term from multiple candidates by comparing their statistical associations with the candidate translations of other query terms. This paper proposes a new method where we examine all combinations of target query term translations corresponding to the source query terms, instead of looking at the candidates for each query term and selecting the best one at a time. The goodness value for a combination of target query terms is computed based on the association value between each pair of the terms in the combination. We tested our method using the NTCIR-3 English-Korean CLIR test collection. The results show some improvements regardless of the association measures we used.
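    The combination method described in the abstract can be sketched in a few lines. The terms and association values below are invented for illustration; the paper derives them from corpus statistics over the candidate translations:

    ```python
    from itertools import product

    # Hypothetical association scores between candidate target-term pairs.
    assoc = {
        ("bank", "river"): 0.1, ("bank", "loan"): 0.9,
        ("shore", "river"): 0.8, ("shore", "loan"): 0.05,
    }

    def pair_score(a, b):
        return assoc.get((a, b), assoc.get((b, a), 0.0))

    def best_combination(candidates):
        """candidates: one list of candidate translations per source query term.
        Scores every full combination instead of picking terms one at a time."""
        best, best_score = None, float("-inf")
        for combo in product(*candidates):
            # Goodness = sum of association values over all term pairs.
            score = sum(
                pair_score(combo[i], combo[j])
                for i in range(len(combo))
                for j in range(i + 1, len(combo))
            )
            if score > best_score:
                best, best_score = combo, score
        return best, best_score

    print(best_combination([["bank", "shore"], ["river", "loan"]]))
    ```

    Note the exhaustive `product(*candidates)`: the gain over per-term selection comes at the cost of a search space exponential in query length.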
    Date
    26.12.2007 20:22:38
  2. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.12
    0.121662155 = product of:
      0.24332431 = sum of:
        0.24332431 = sum of:
          0.18376437 = weight(_text_:translations in 1967) [ClassicSimilarity], result of:
            0.18376437 = score(doc=1967,freq=2.0), product of:
              0.3789649 = queryWeight, product of:
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.051807534 = queryNorm
              0.48491132 = fieldWeight in 1967, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.046875 = fieldNorm(doc=1967)
          0.05955994 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
            0.05955994 = score(doc=1967,freq=4.0), product of:
              0.18142116 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051807534 = queryNorm
              0.32829654 = fieldWeight in 1967, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1967)
      0.5 = coord(1/2)
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and /or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
  3. Alvarado, R.U.: Cataloging Pierre Bourdieu's books (1994) 0.11
    0.112939805 = product of:
      0.22587961 = sum of:
        0.22587961 = sum of:
          0.18376437 = weight(_text_:translations in 894) [ClassicSimilarity], result of:
            0.18376437 = score(doc=894,freq=2.0), product of:
              0.3789649 = queryWeight, product of:
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.051807534 = queryNorm
              0.48491132 = fieldWeight in 894, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.046875 = fieldNorm(doc=894)
          0.042115234 = weight(_text_:22 in 894) [ClassicSimilarity], result of:
            0.042115234 = score(doc=894,freq=2.0), product of:
              0.18142116 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051807534 = queryNorm
              0.23214069 = fieldWeight in 894, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=894)
      0.5 = coord(1/2)
    
    Abstract
    Subject headings do not always adequately express the subject content of books and other library materials. Whether due to cataloguer error or inadequacy in the authority list, this deficiency makes it difficult for users to access information. In an attempt to solve this problem, the study evaluated the adequacy of the LoC Subject Headings assigned to the 23 books of Pierre Bourdieu, whose philosophical ideas were judged likely to form a good test of the ability of the subject headings to reflect the ideas proposed by the author. The study examined the subject headings given to 22 books, and their translations into English, Spanish, Portuguese, and German, comprising 88 records in OCLC as of Dec 91. It was found that most of the books received headings not corresponding to their content, as the headings were assigned from the functionalist paradigm. In general, LCSHs ignore the conceptual categories of other paradigms, do not match the current vocabulary used by social scientists, and are ideologically biased.
  4. Dabbadie, M.; Blancherie, J.M.: Alexandria, a multilingual dictionary for knowledge management purposes (2006) 0.11
    0.112939805 = product of:
      0.22587961 = sum of:
        0.22587961 = sum of:
          0.18376437 = weight(_text_:translations in 2465) [ClassicSimilarity], result of:
            0.18376437 = score(doc=2465,freq=2.0), product of:
              0.3789649 = queryWeight, product of:
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.051807534 = queryNorm
              0.48491132 = fieldWeight in 2465, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.046875 = fieldNorm(doc=2465)
          0.042115234 = weight(_text_:22 in 2465) [ClassicSimilarity], result of:
            0.042115234 = score(doc=2465,freq=2.0), product of:
              0.18142116 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051807534 = queryNorm
              0.23214069 = fieldWeight in 2465, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2465)
      0.5 = coord(1/2)
    
    Abstract
    Alexandria is an innovation of international impact. It is the only multilingual dictionary for websites and PCs. A double click on a word opens a small window that gives interactive translations between 22 languages and includes meaning, synonyms and associated expressions. It is an ASP application grounded on a semantic network that is portable on any operating system or platform. Behind the application is the Integral Dictionary, the semantic network created by Memodata. Alexandria can be customized with specific vocabulary, descriptive articles, images, sounds, videos, etc. Its domains of application are considerable: e-tourism, online media, language learning, international websites. Alexandria has also proved to be a basic tool for knowledge management purposes. The application can be customized according to a user's or an organization's needs. An application dedicated to mobile devices is currently being developed. Future developments are planned in the field of e-tourism in relation with French "pôles de compétitivité".
  5. Yee, M.M.: What is a work? : part 2: the Anglo-American cataloging codes (1994) 0.11
    0.112939805 = product of:
      0.22587961 = sum of:
        0.22587961 = sum of:
          0.18376437 = weight(_text_:translations in 5945) [ClassicSimilarity], result of:
            0.18376437 = score(doc=5945,freq=2.0), product of:
              0.3789649 = queryWeight, product of:
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.051807534 = queryNorm
              0.48491132 = fieldWeight in 5945, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.046875 = fieldNorm(doc=5945)
          0.042115234 = weight(_text_:22 in 5945) [ClassicSimilarity], result of:
            0.042115234 = score(doc=5945,freq=2.0), product of:
              0.18142116 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051807534 = queryNorm
              0.23214069 = fieldWeight in 5945, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5945)
      0.5 = coord(1/2)
    
    Abstract
    Anglo-American codes are examined to determine the implicit or acting concept of work in each, in order to trace the development of our current implicit concept of work, as embodied in AACR2R. The following conditions are examined, using comparison tables: (1) contraction of a work (abridgements, condensations, digests, epitomes, outlines, chrestomathies, excerpts, extracts, selections); and (2) change in substance of a work (adaptations, dramatizations, free translations, novelizations, paraphrases, versifications, films or filmstrips of a text, musical arrangements, musical amplifications, musical settings, musical simplifications, musical transcriptions, musical versions, parodies, imitations, performances, reproductions of art works, revisions, editing, enlargements, expansion, updating, translation).
    Source
    Cataloging and classification quarterly. 19(1994) no.2, S.5-22
  6. Bagheri, M.: Development of thesauri in Iran (2006) 0.11
    0.112939805 = product of:
      0.22587961 = sum of:
        0.22587961 = sum of:
          0.18376437 = weight(_text_:translations in 260) [ClassicSimilarity], result of:
            0.18376437 = score(doc=260,freq=2.0), product of:
              0.3789649 = queryWeight, product of:
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.051807534 = queryNorm
              0.48491132 = fieldWeight in 260, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.046875 = fieldNorm(doc=260)
          0.042115234 = weight(_text_:22 in 260) [ClassicSimilarity], result of:
            0.042115234 = score(doc=260,freq=2.0), product of:
              0.18142116 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051807534 = queryNorm
              0.23214069 = fieldWeight in 260, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=260)
      0.5 = coord(1/2)
    
    Abstract
    The need for Persian thesauri became apparent during the late 1960s with the advent of documentation centres in Iran. The first Persian controlled vocabulary was published by IRANDOC in 1977. Other centres worked on translations of existing thesauri, but it was soon realised that these efforts did not meet the needs of the centres. After the Islamic revolution in 1979, the foundation of new centres intensified the need for Persian thesauri, especially in the fields of history and government documents. Also, during the Iran-Iraq war, Iranian research centres produced reports in scientific and technical fields, both to support military requirements and to meet society's needs. In order to provide a comprehensive thesaurus, the Council of Scientific Research of Iran approved a project for the compilation of such a work. Nowadays, 12 Persian thesauri are available and others are being prepared, based on the literary corpus and conformity with characteristics of Iranian culture.
    Source
    Indexer. 25(2006) no.1, S.19-22
  7. Musmann, K.: ¬The diffusion of knowledge across the linguistic frontier : an examination of monographic translations (1989) 0.11
    0.1082842 = product of:
      0.2165684 = sum of:
        0.2165684 = product of:
          0.4331368 = sum of:
            0.4331368 = weight(_text_:translations in 602) [ClassicSimilarity], result of:
              0.4331368 = score(doc=602,freq=4.0), product of:
                0.3789649 = queryWeight, product of:
                  7.314861 = idf(docFreq=79, maxDocs=44218)
                  0.051807534 = queryNorm
                1.142947 = fieldWeight in 602, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.314861 = idf(docFreq=79, maxDocs=44218)
                  0.078125 = fieldNorm(doc=602)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Presents a preliminary assessment of the extent and characteristics of the translations of monographs as a form of information transfer and communication between language blocs. The study was based on statistical data provided by Unesco.
  8. Nath, I.: Machine translations : theories that make computers translate (1999) 0.11
    0.10719589 = product of:
      0.21439178 = sum of:
        0.21439178 = product of:
          0.42878357 = sum of:
            0.42878357 = weight(_text_:translations in 4420) [ClassicSimilarity], result of:
              0.42878357 = score(doc=4420,freq=2.0), product of:
                0.3789649 = queryWeight, product of:
                  7.314861 = idf(docFreq=79, maxDocs=44218)
                  0.051807534 = queryNorm
                1.1314598 = fieldWeight in 4420, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.314861 = idf(docFreq=79, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4420)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  9. Slavic, A.: UDC translations : a 2004 survey report and bibliography (2004) 0.11
    0.10719589 = product of:
      0.21439178 = sum of:
        0.21439178 = product of:
          0.42878357 = sum of:
            0.42878357 = weight(_text_:translations in 3744) [ClassicSimilarity], result of:
              0.42878357 = score(doc=3744,freq=2.0), product of:
                0.3789649 = queryWeight, product of:
                  7.314861 = idf(docFreq=79, maxDocs=44218)
                  0.051807534 = queryNorm
                1.1314598 = fieldWeight in 3744, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.314861 = idf(docFreq=79, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3744)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  10. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.10334171 = sum of:
      0.08228409 = product of:
        0.24685228 = sum of:
          0.24685228 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.24685228 = score(doc=562,freq=2.0), product of:
              0.43922484 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.051807534 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.021057617 = product of:
        0.042115234 = sum of:
          0.042115234 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.042115234 = score(doc=562,freq=2.0), product of:
              0.18142116 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051807534 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Vgl.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
  11. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2014) 0.10
    0.10138513 = product of:
      0.20277026 = sum of:
        0.20277026 = sum of:
          0.15313698 = weight(_text_:translations in 1962) [ClassicSimilarity], result of:
            0.15313698 = score(doc=1962,freq=2.0), product of:
              0.3789649 = queryWeight, product of:
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.051807534 = queryNorm
              0.4040928 = fieldWeight in 1962, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1962)
          0.04963328 = weight(_text_:22 in 1962) [ClassicSimilarity], result of:
            0.04963328 = score(doc=1962,freq=4.0), product of:
              0.18142116 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051807534 = queryNorm
              0.27358043 = fieldWeight in 1962, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1962)
      0.5 = coord(1/2)
    
    Abstract
    This article reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The article discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the Dewey Decimal Classification [DDC] (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
  12. Frâncu, V.: ¬An interpretation of the FRBR model (2004) 0.10
    0.10066577 = product of:
      0.20133154 = sum of:
        0.20133154 = sum of:
          0.17325471 = weight(_text_:translations in 2647) [ClassicSimilarity], result of:
            0.17325471 = score(doc=2647,freq=4.0), product of:
              0.3789649 = queryWeight, product of:
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.051807534 = queryNorm
              0.4571788 = fieldWeight in 2647, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.03125 = fieldNorm(doc=2647)
          0.028076824 = weight(_text_:22 in 2647) [ClassicSimilarity], result of:
            0.028076824 = score(doc=2647,freq=2.0), product of:
              0.18142116 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051807534 = queryNorm
              0.15476047 = fieldWeight in 2647, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2647)
      0.5 = coord(1/2)
    
    Abstract
    Despite the existence of a logical structural model for bibliographic records which integrates any record type, library catalogues persist in offering catalogue records at the level of 'items'. Such records however, do not clearly indicate which works they contain. Hence the search possibilities of the end user are unduly limited. The Functional Requirements for Bibliographic Records (FRBR) present through a conceptual model, independent of any cataloguing code or implementation, a globalized view of the bibliographic universe. This model, a synthesis of the existing cataloguing rules, consists of clearly structured entities and well defined types of relationships among them. From a theoretical viewpoint, the model is likely to be a good knowledge organiser with great potential in identifying the author and the work represented by an item or publication and is able to link different works of the author with different editions, translations or adaptations of those works aiming at better answering the user needs. This paper presents an interpretation of the FRBR model, contrasting it with a traditional bibliographic record of a complex library material.
    Content
    1. Introduction With the diversification of the material available in library collections such as: music, film, 3D objects, cartographic material and electronic resources like CD-ROMs and Web sites, the existing cataloguing principles and codes are no longer adequate to enable the user to find, identify, select and obtain a particular entity. The problem is not only that material fails to be appropriately represented in the catalogue records but also that access to such material, or parts of it, is difficult, if possible at all. Consequently, the need emerged to develop new rules and build up a new conceptual model able to cope with all the requirements demanded by the existing library material. The Functional Requirements for Bibliographic Records, developed by an IFLA Study Group from 1992 through 1997, present a generalised view of the bibliographic universe and are intended to be independent of any cataloguing code or implementation (Tillett, 2002). Outstanding scholars like Antonio Panizzi, Charles A. Cutter and Seymour Lubetzky formulated the basic cataloguing principles, some of which can be retrieved, as Denton (2003) argues, as updated versions among the basic lines of the FRBR model: - the relation work-author groups all the works of an author - all the editions, translations, adaptations of a work are clearly separated (as expressions and manifestations) - all the expressions and manifestations of a work are collocated with their related works in bibliographic families - any document (manifestation and item) can be found if the author, title or subject of that document is known - the author is authorised by the authority control - the title is an intrinsic part of the work + authority control entity
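    The work-expression-manifestation chain discussed above lends itself to a small data-structure illustration. All class names and the sample records here are invented for illustration, not taken from any cataloguing code:

    ```python
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Manifestation:
        edition: str                    # a concrete publication

    @dataclass
    class Expression:
        kind: str                       # e.g. "original text", "translation"
        language: str
        manifestations: List[Manifestation] = field(default_factory=list)

    @dataclass
    class Work:
        title: str
        author: str
        expressions: List[Expression] = field(default_factory=list)

    hamlet = Work("Hamlet", "Shakespeare", [
        Expression("original text", "en", [Manifestation("Folio, 1623")]),
        Expression("translation", "de", [Manifestation("Schlegel, 1798")]),
    ])

    # Collocation: every edition and translation is reachable from one
    # work node, instead of being scattered over unrelated item records.
    print([e.language for e in hamlet.expressions])  # ['en', 'de']
    ```

    An item-level catalogue record corresponds to a single `Manifestation` leaf; the FRBR point is that navigation starts from the `Work` root.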
    Date
    17. 6.2015 14:40:22
  13. Greiner-Petter, A.; Schubotz, M.; Cohl, H.S.; Gipp, B.: Semantic preserving bijective mappings for expressions involving special functions between computer algebra systems and document preparation systems (2019) 0.10
    0.10066577 = product of:
      0.20133154 = sum of:
        0.20133154 = sum of:
          0.17325471 = weight(_text_:translations in 5499) [ClassicSimilarity], result of:
            0.17325471 = score(doc=5499,freq=4.0), product of:
              0.3789649 = queryWeight, product of:
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.051807534 = queryNorm
              0.4571788 = fieldWeight in 5499, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.03125 = fieldNorm(doc=5499)
          0.028076824 = weight(_text_:22 in 5499) [ClassicSimilarity], result of:
            0.028076824 = score(doc=5499,freq=2.0), product of:
              0.18142116 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051807534 = queryNorm
              0.15476047 = fieldWeight in 5499, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=5499)
      0.5 = coord(1/2)
    
    Abstract
    Purpose Modern mathematicians and scientists of math-related disciplines often use Document Preparation Systems (DPS) to write and Computer Algebra Systems (CAS) to calculate mathematical expressions. Usually, they translate the expressions manually between DPS and CAS. This process is time-consuming and error-prone. The purpose of this paper is to automate this translation. This paper uses Maple and Mathematica as the CAS, and LaTeX as the DPS. Design/methodology/approach Bruce Miller at the National Institute of Standards and Technology (NIST) developed a collection of special LaTeX macros that create links from mathematical symbols to their definitions in the NIST Digital Library of Mathematical Functions (DLMF). The authors are using these macros to perform rule-based translations between the formulae in the DLMF and CAS. Moreover, the authors develop software to ease the creation of new rules and to discover inconsistencies. Findings The authors created 396 mappings and translated 58.8 percent of DLMF formulae (2,405 expressions) successfully between Maple and DLMF. For a significant percentage, the special function definitions in Maple and the DLMF were different. An atomic symbol in one system maps to a composite expression in the other system. The translator was also successfully used for automatic verification of mathematical online compendia and CAS. The evaluation techniques discovered two errors in the DLMF and one defect in Maple. Originality/value This paper introduces the first translation tool for special functions between LaTeX and CAS. The approach improves error-prone manual translations and can be used to verify mathematical online compendia and CAS.
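    A rule-based DLMF-to-Maple translation of the kind described can be sketched as pattern rewriting. The two rules below are simplified stand-ins for the paper's 396 mappings; real DLMF semantic macros carry richer argument syntax:

    ```python
    import re

    # Toy rewrite rules: DLMF-style semantic LaTeX macro -> Maple call.
    RULES = [
        (r"\\BesselJ\{\\?(\w+)\}@\{(\w+)\}", r"BesselJ(\1, \2)"),
        (r"\\EulerGamma@\{(\w+)\}", r"GAMMA(\1)"),
    ]

    def latex_to_maple(expr: str) -> str:
        """Apply each rewrite rule in order over the input string."""
        for pattern, repl in RULES:
            expr = re.sub(pattern, repl, expr)
        return expr

    print(latex_to_maple(r"\BesselJ{\nu}@{z}"))   # BesselJ(nu, z)
    print(latex_to_maple(r"\EulerGamma@{z}"))     # GAMMA(z)
    ```

    The inconsistencies the paper reports arise exactly where no such one-to-one rule exists, i.e. where an atomic symbol on one side maps to a composite expression on the other.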
    Date
    20. 1.2015 18:30:22
  14. Beall, J.: Approaches to expansions : case studies from the German and Vietnamese translations (2003) 0.09
    0.09411651 = product of:
      0.18823302 = sum of:
        0.18823302 = sum of:
          0.15313698 = weight(_text_:translations in 1748) [ClassicSimilarity], result of:
            0.15313698 = score(doc=1748,freq=2.0), product of:
              0.3789649 = queryWeight, product of:
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.051807534 = queryNorm
              0.4040928 = fieldWeight in 1748, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1748)
          0.03509603 = weight(_text_:22 in 1748) [ClassicSimilarity], result of:
            0.03509603 = score(doc=1748,freq=2.0), product of:
              0.18142116 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051807534 = queryNorm
              0.19345059 = fieldWeight in 1748, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1748)
      0.5 = coord(1/2)
    
    Object
    DDC-22
  15. Leazer, G.H.; Smiraglia, R.P.: Bibliographic families in the library catalog : a qualitative analysis and grounded theory (1999) 0.09
    0.09411651 = product of:
      0.18823302 = sum of:
        0.18823302 = sum of:
          0.15313698 = weight(_text_:translations in 107) [ClassicSimilarity], result of:
            0.15313698 = score(doc=107,freq=2.0), product of:
              0.3789649 = queryWeight, product of:
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.051807534 = queryNorm
              0.4040928 = fieldWeight in 107, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.0390625 = fieldNorm(doc=107)
          0.03509603 = weight(_text_:22 in 107) [ClassicSimilarity], result of:
            0.03509603 = score(doc=107,freq=2.0), product of:
              0.18142116 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051807534 = queryNorm
              0.19345059 = fieldWeight in 107, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=107)
      0.5 = coord(1/2)
    
    Abstract
    Forty-five years have passed since Lubetzky outlined the primary objectives of the catalog, which should facilitate the identification of specific bibliographic entities, and the explicit recognition of works and relationships among them. Still, our catalogs are better designed to identify specific bibliographic entities than they are to guide users among the network of potential related editions and translations of works. In this paper, we seek to examine qualitatively some interesting examples of families of related works, defined as bibliographic families. Although the cases described here were derived from a random sample, this is a qualitative analysis. We selected these bibliographic families for their ability to reveal the strengths and weaknesses of Leazer's model, which incorporates relationship taxonomies by Tillett and Smiraglia. Qualitative analysis is intended to produce an explanation of a phenomenon, particularly an identification of any patterns observed. Patterns observed in qualitative analysis can be used to affirm external observations of the same phenomena; conclusions can contribute to what is known as grounded theory: a unique explanation grounded in the phenomenon under study. We arrive at two statements of grounded theory concerning bibliographic families: cataloger-generated implicit maps among works are inadequate, and qualitative analysis suggests the complexity of even the smallest bibliographic families. We conclude that user behavior study is needed to suggest which alternative maps are preferable.
    Date
    10. 9.2000 17:38:22
  16. Godby, C.J.; Smith, D.; Childress, E.: Encoding application profiles in a computational model of the crosswalk (2008) 0.09
    Abstract
    OCLC's Crosswalk Web Service (Godby, Smith and Childress, 2008) formalizes the notion of crosswalk, as defined in Gill et al. (n.d.), by hiding technical details and permitting the semantic equivalences to emerge as the centerpiece. One outcome is that metadata experts, who are typically not programmers, can enter the translation logic into a spreadsheet that can be automatically converted into executable code. In this paper, we describe the implementation of the Dublin Core Terms application profile in the management of crosswalks involving MARC. A crosswalk that encodes an application profile extends the typical format with two columns: one that annotates the namespace to which an element belongs, and one that annotates a 'broader-narrower' relation between a pair of elements, such as Dublin Core coverage and Dublin Core Terms spatial. This information is sufficient to produce scripts written in OCLC's Semantic Equivalence Expression Language (or Seel), which are called from the Crosswalk Web Service to generate production-grade translations. With its focus on elements that can be mixed, matched, added, and redefined, the application profile (Heery and Patel, 2000) is a natural fit with the translation model of the Crosswalk Web Service, which attempts to achieve interoperability by mapping one pair of elements at a time.
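    The two extra columns the abstract describes can be pictured as rows of a simple mapping table. The following Python sketch is a hypothetical illustration only (the `CrosswalkRow` type and `translate` function are invented for this example and are not the Seel language or the Crosswalk Web Service API); it assumes one row per element pair, with a namespace column for each element and an optional broader/narrower relation:

```python
# Hypothetical sketch of an application-profile crosswalk row: each mapping
# carries an element pair plus the two extra columns from the abstract --
# the namespace of each element and an optional broader/narrower relation.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class CrosswalkRow:
    source_ns: str                   # namespace column, e.g. "dcterms"
    source_element: str
    target_ns: str
    target_element: str
    relation: Optional[str] = None   # e.g. "narrower", as for dcterms:spatial vs dc:coverage

# Example rows; the dcterms:spatial / dc:coverage pair comes from the text,
# the dc:title -> MARC 245 row is an illustrative assumption.
rows = [
    CrosswalkRow("dcterms", "spatial", "dc", "coverage", relation="narrower"),
    CrosswalkRow("dc", "title", "marc", "245"),
]

def translate(element: str, ns: str, table: list) -> Optional[Tuple[str, str]]:
    """Map one (namespace, element) pair at a time, as the service does."""
    for row in table:
        if row.source_ns == ns and row.source_element == element:
            return (row.target_ns, row.target_element)
    return None  # no mapping declared for this pair

print(translate("spatial", "dcterms", rows))  # ('dc', 'coverage')
```

    A spreadsheet of such rows is exactly the kind of declarative table that, per the abstract, a non-programming metadata expert could maintain and a converter could compile into executable mapping scripts.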
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  17. Ménard, E.; Khashman, N.; Kochkina, S.; Torres-Moreno, J.-M.; Velazquez-Morales, P.; Zhou, F.; Jourlin, P.; Rawat, P.; Peinl, P.; Linhares Pontes, E.; Brunetti., I.: ¬A second life for TIIARA : from bilingual to multilingual! (2016) 0.09
    Abstract
    Multilingual controlled vocabularies are rare and often very limited in the choice of languages offered. TIIARA (Taxonomy for Image Indexing and RetrievAl) is a bilingual taxonomy developed for image indexing and retrieval. This controlled vocabulary offers indexers and image searchers innovative and coherent access points for ordinary images. The preliminary steps of the elaboration of the bilingual structure are presented. For its initial development, TIIARA included only two languages, French and English. As a logical follow-up, TIIARA was translated into eight languages (Arabic, Spanish, Brazilian Portuguese, Mandarin Chinese, Italian, German, Hindi and Russian) in order to increase its international scope. This paper briefly describes the different stages of the development of the bilingual structure. The processes used in the translations are subsequently presented, as well as the main difficulties encountered by the translators. Adding more languages in TIIARA constitutes an added value for a controlled vocabulary meant to be used by image searchers, who are often limited by their lack of knowledge of multiple languages.
    Source
    Knowledge organization. 43(2016) no.1, S.22-34
  18. Stark, R.: ¬The newspaper of the future (1994) 0.09
    Abstract
    Describes the Advanced Communications Trials project, conducted by Digithurst Ltd, UK, to develop an international full text newspaper database involving: video conferencing; data broadcasting via wireless; and online language translations
  19. Bernard, U.: Machine translation : success or failure using MT in an IT research and development environment (1996) 0.09
    Abstract
    Discusses the use of raw machine translations in an IT research and development environment. Researchers at the German GMD use machine translation as a drafting tool for scientific papers. The language pairs are German to English and English to German. Compares the success of raw machine translations of this material produced on an experimental basis by means of the MT systems LOGOS, METAL and Globallink Power Translator Professional. Results indicate a promising use of machine translation as a drafting tool
  20. Bruce, H.: ¬The user's view of the Internet (2002) 0.08
    Footnote
    Chapter 2 (Technology and People) focuses on several theories of technological acceptance and diffusion. Unfortunately, Bruce's presentation is somewhat confusing as he moves from one theory to the next, never quite connecting them into a logical sequence or coherent whole. Two theories are of particular interest to Bruce: the Theory of Diffusion of Innovations and the Theory of Planned Behavior. The Theory of Diffusion of Innovations is an "information-centric view of technology acceptance" in which technology adopters are placed in the information flows of society from which they learn about innovations and "drive innovation adoption decisions" (p. 20). The Theory of Planned Behavior maintains that the "performance of a behavior is a joint function of intentions and perceived behavioral control" (i.e., how much control a person thinks they have) (pp. 22-23). Bruce combines these two theories to form the basis for the Technology Acceptance Model. This model posits that "an individual's acceptance of information technology is based on beliefs, attitudes, intentions, and behaviors" (p. 24). A recurring theme echoes through all these theories and models: "individual perceptions of the innovation or technology are critical" in terms of both its characteristics and its use (pp. 24-25). From these, in turn, Bruce derives a predictive theory of the role personal perceptions play in technology adoption: Personal Innovativeness of Information Technology Adoption (PIITA). Personal innovativeness is defined as "the willingness of an individual to try out any new information technology" (p. 26). In general, the PIITA theory predicts that information technology will be adopted by individuals who have a greater exposure to mass media, rely less on the evaluation of information technology by others, exhibit a greater ability to cope with uncertainty and take risks, and require a less positive perception of an information technology prior to its adoption.
Chapter 3 (A Focus on Usings) introduces the User-Centered Paradigm (UCP). The UCP is characteristic of the shift of emphasis from technology to users as the driving force behind technology and research agendas for Internet development [for a dissenting view, see Andrew Dillon's (2003) challenge to the utility of user-centeredness for design guidance]. It entails the "broad acceptance of the user-oriented perspective across a range of disciplines and professional fields," such as business, education, cognitive engineering, and information science (p. 34).
    The UCP's effect on business practices is focused mainly in the management and marketing areas. Marketing experienced a shift from "product-oriented operations," with its focus on "selling the products' features" and customer contact only at the point of sale, toward more service-centered business practice ("customer demand orientation") and the development of one-to-one customer relationships (pp. 35-36). For management, the adoption of the UCP caused a shift from "mechanistic, bureaucratic, top-down organizational structures" to "flatter, inclusive, and participative" ones (p. 37). In education, practice shifted from the teacher-centered model, where the "teacher is responsible for and makes all the decisions related to the learning environment," to a learner-centered model, where the student is "responsible for his or her own learning" and the teacher focuses on "matching learning events to the individual skills, aptitudes, and interests of the individual learner" (pp. 38-39). Cognitive engineering saw the rise of "user-centered design" and human factors that were concerned with applying "scientific knowledge of humans to the design of man-machine interface systems" (p. 44). The UCP had a great effect on information science in the "design of information systems" (p. 47). Prior to the UCP's explicit proposal by Brenda Dervin and M. Nilan in 1986, systems design was dominated by the "physical or system-oriented paradigm" (p. 48). The physical paradigm held a positivistic and materialistic view of technology and (passive) human interaction, as exemplified by the 1953 Cranfield tests of information retrieval mechanisms. Instead, the UCP focuses on "users rather than systems" by making the perceptions of individual information users the "centerpiece consideration for information service and system design" (pp. 47-48).
Bruce briefly touches on the various schools of thought within the user-oriented paradigm, such as the cognitive/self studies approach, with its emphasis on an individual's knowledge structures or model of the world [e.g., Belkin (1990)]; the cognitive/context studies approach, which focuses on "context in explaining variations in information behavior" [e.g., Savolainen (1995) and Dervin's (1999) sensemaking]; and the social constructionism/discourse analytic theory, with its focus on language, not mental/knowledge constructs, as the primary shaper of the world as a system of intersubjective meanings [e.g., Talja (1996)] (pp. 53-54). Drawing from the rich tradition of user-oriented research, Bruce attempts to gain a metatheoretical understanding of the Internet as a phenomenon by combining Dervin's (1996) "micromoments of human usings" with the French philosopher Bruno Latour's (1999) "conception of circulating reference" to form what I term the Metatheory of Circulating Usings (pp. ix, 56, 60). According to Bruce, Latour's concept is designed to bridge "the gap between mind and object" by engaging in a "succession of finely grained transformations that construct and transfer truth about the object" through a chain of "microtranslations" from "matter to form," thereby connecting mind and object (p. 56). The connection works as long as the chain remains unbroken. The nature of this chain of "information producing translations" is such that as one moves away from the object, one experiences a "reduction" of the object's "locality, particularity, materiality, multiplicity and continuity," while simultaneously gaining the "amplification" of its "compatibility, standardization, text, calculation, circulation, and relative universality" (p. 57).

Types

  • a 1981
  • m 157
  • s 101
  • el 77
  • b 31
  • r 10
  • x 8
  • i 3
  • n 3
  • p 2
  • h 1