Search (4240 results, page 1 of 212)

  1. Frâncu, V.: ¬An interpretation of the FRBR model (2004) 0.25
    0.2472778 = sum of:
      0.023919154 = product of:
        0.07175746 = sum of:
          0.07175746 = weight(_text_:objects in 2647) [ClassicSimilarity], result of:
            0.07175746 = score(doc=2647,freq=2.0), product of:
              0.30548716 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.05747565 = queryNorm
              0.23489517 = fieldWeight in 2647, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.03125 = fieldNorm(doc=2647)
        0.33333334 = coord(1/3)
      0.22335865 = sum of:
        0.19221002 = weight(_text_:translations in 2647) [ClassicSimilarity], result of:
          0.19221002 = score(doc=2647,freq=4.0), product of:
            0.42042637 = queryWeight, product of:
              7.314861 = idf(docFreq=79, maxDocs=44218)
              0.05747565 = queryNorm
            0.4571788 = fieldWeight in 2647, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.314861 = idf(docFreq=79, maxDocs=44218)
              0.03125 = fieldNorm(doc=2647)
        0.031148627 = weight(_text_:22 in 2647) [ClassicSimilarity], result of:
          0.031148627 = score(doc=2647,freq=2.0), product of:
            0.20126992 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.05747565 = queryNorm
            0.15476047 = fieldWeight in 2647, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=2647)
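The score breakdown above is Lucene's ClassicSimilarity (tf-idf) explain output. A minimal sketch reproducing the quoted figures for the "translations" term in doc 2647 (freq=4.0, docFreq=79, maxDocs=44218, fieldNorm=0.03125, queryNorm=0.05747565, all taken from the tree above):

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    # idf(79, 44218) reproduces the 7.314861 shown above
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def field_weight(freq, doc_freq, max_docs, field_norm):
    # fieldWeight = tf * idf * fieldNorm, with tf = sqrt(freq)
    return math.sqrt(freq) * idf(doc_freq, max_docs) * field_norm

query_norm = 0.05747565
query_weight = idf(79, 44218) * query_norm            # ~0.42042637
fw = field_weight(4.0, 79, 44218, 0.03125)            # ~0.4571788
print(query_weight * fw)                              # ~0.19221002, the term score above
```

The same formula with docFreq=3622 and docFreq=590 reproduces the "22" and "objects" term weights in the other explain trees.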
    
    Abstract
    Despite the existence of a logical structural model for bibliographic records which integrates any record type, library catalogues persist in offering catalogue records at the level of 'items'. Such records, however, do not clearly indicate which works they contain, so the search possibilities of the end user are unduly limited. The Functional Requirements for Bibliographic Records (FRBR) present, through a conceptual model independent of any cataloguing code or implementation, a globalized view of the bibliographic universe. This model, a synthesis of the existing cataloguing rules, consists of clearly structured entities and well-defined types of relationships among them. From a theoretical viewpoint, the model is likely to be a good knowledge organiser, with great potential for identifying the author and the work represented by an item or publication, and it is able to link different works of an author with different editions, translations or adaptations of those works, with the aim of better answering user needs. This paper presents an interpretation of the FRBR model, contrasting it with a traditional bibliographic record for a complex library material.
    Content
    1. Introduction
    With the diversification of the material available in library collections (music, film, 3D objects, cartographic material, and electronic resources such as CD-ROMs and Web sites), the existing cataloguing principles and codes are no longer adequate to enable the user to find, identify, select and obtain a particular entity. The problem is not only that such material fails to be appropriately represented in catalogue records, but also that access to the material, or to parts of it, is difficult, if possible at all. Consequently, the need emerged to develop new rules and to build a new conceptual model able to cope with all the requirements of the existing library material. The Functional Requirements for Bibliographic Records, developed by an IFLA Study Group from 1992 through 1997, present a generalised view of the bibliographic universe and are intended to be independent of any cataloguing code or implementation (Tillett, 2002). Outstanding scholars like Antonio Panizzi, Charles A. Cutter and Seymour Lubetzky formulated the basic cataloguing principles, some of which can be retrieved, as Denton (2003) argues, in updated versions between the basic lines of the FRBR model:
    - the relation work-author groups all the works of an author
    - all the editions, translations and adaptations of a work are clearly separated (as expressions and manifestations)
    - all the expressions and manifestations of a work are collocated with their related works in bibliographic families
    - any document (manifestation and item) can be found if the author, title or subject of that document is known
    - the author is authorised by the authority control
    - the title is an intrinsic part of the work + authority control entity
    Date
    17. 6.2015 14:40:22
  2. Seo, H.-C.; Kim, S.-B.; Rim, H.-C.; Myaeng, S.-H.: Improving query translation in English-Korean Cross-language information retrieval (2005) 0.20
    0.19991766 = product of:
      0.39983532 = sum of:
        0.39983532 = sum of:
          0.35311237 = weight(_text_:translations in 1023) [ClassicSimilarity], result of:
            0.35311237 = score(doc=1023,freq=6.0), product of:
              0.42042637 = queryWeight, product of:
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.05747565 = queryNorm
              0.8398911 = fieldWeight in 1023, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.046875 = fieldNorm(doc=1023)
          0.046722937 = weight(_text_:22 in 1023) [ClassicSimilarity], result of:
            0.046722937 = score(doc=1023,freq=2.0), product of:
              0.20126992 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05747565 = queryNorm
              0.23214069 = fieldWeight in 1023, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1023)
      0.5 = coord(1/2)
    
    Abstract
    Query translation is a viable method for cross-language information retrieval (CLIR), but it suffers from translation ambiguities caused by multiple translations of individual query terms. Previous research has employed various methods for disambiguation, including the method of selecting an individual target query term from multiple candidates by comparing their statistical associations with the candidate translations of other query terms. This paper proposes a new method where we examine all combinations of target query term translations corresponding to the source query terms, instead of looking at the candidates for each query term and selecting the best one at a time. The goodness value for a combination of target query terms is computed based on the association value between each pair of the terms in the combination. We tested our method using the NTCIR-3 English-Korean CLIR test collection. The results show some improvements regardless of the association measures we used.
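The paper's core idea, scoring whole combinations of candidate translations by their pairwise associations rather than picking each term's best translation in isolation, can be sketched as follows. The candidate terms and association values here are invented for illustration, and the paper's actual association measures differ:

```python
from itertools import combinations, product

def best_translation(candidates, assoc):
    """Pick the combination of target-term translations with the highest
    total pairwise association (goodness), examining all combinations.

    candidates: one list of candidate translations per source query term
    assoc: function (t1, t2) -> association strength between two terms
    """
    def goodness(combo):
        return sum(assoc(a, b) for a, b in combinations(combo, 2))
    return max(product(*candidates), key=goodness)

# Hypothetical toy data: two source terms, two candidate translations each.
assoc_table = {
    frozenset({"bank", "river"}): 0.1,
    frozenset({"bank", "money"}): 0.9,
    frozenset({"shore", "river"}): 0.8,
    frozenset({"shore", "money"}): 0.05,
}

def lookup(a, b):
    return assoc_table.get(frozenset({a, b}), 0.0)

print(best_translation([["bank", "shore"], ["river", "money"]], lookup))
# -> ('bank', 'money')
```

Note that the exhaustive search over `product(*candidates)` grows exponentially with the number of query terms, which is the price of avoiding the one-term-at-a-time greedy selection of earlier work.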
    Date
    26.12.2007 20:22:38
  3. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.13
    0.13497287 = product of:
      0.26994574 = sum of:
        0.26994574 = sum of:
          0.2038695 = weight(_text_:translations in 1967) [ClassicSimilarity], result of:
            0.2038695 = score(doc=1967,freq=2.0), product of:
              0.42042637 = queryWeight, product of:
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.05747565 = queryNorm
              0.48491132 = fieldWeight in 1967, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.046875 = fieldNorm(doc=1967)
          0.06607622 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
            0.06607622 = score(doc=1967,freq=4.0), product of:
              0.20126992 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05747565 = queryNorm
              0.32829654 = fieldWeight in 1967, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1967)
      0.5 = coord(1/2)
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
  4. Alvarado, R.U.: Cataloging Pierre Bourdieu's books (1994) 0.13
    0.12529622 = product of:
      0.25059244 = sum of:
        0.25059244 = sum of:
          0.2038695 = weight(_text_:translations in 894) [ClassicSimilarity], result of:
            0.2038695 = score(doc=894,freq=2.0), product of:
              0.42042637 = queryWeight, product of:
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.05747565 = queryNorm
              0.48491132 = fieldWeight in 894, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.046875 = fieldNorm(doc=894)
          0.046722937 = weight(_text_:22 in 894) [ClassicSimilarity], result of:
            0.046722937 = score(doc=894,freq=2.0), product of:
              0.20126992 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05747565 = queryNorm
              0.23214069 = fieldWeight in 894, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=894)
      0.5 = coord(1/2)
    
    Abstract
    Subject headings do not always adequately express the subject content of books and other library materials. Whether due to cataloguer error or inadequacy in the authority list, this deficiency makes it difficult for users to access information. In an attempt to solve this problem, the study evaluated the adequacy of the LoC Subject Headings assigned to the 23 books of Pierre Bourdieu, whose philosophical ideas were judged likely to form a good test of the ability of the subject headings to reflect the ideas proposed by the author. The study examined the subject headings given to 22 books, and their translations into English, Spanish, Portuguese, and German, comprising 88 records in OCLC as of Dec 91. It was found that most of the books received headings not corresponding to their content, as the headings were assigned from the functionalist paradigm. In general, LCSHs ignore the conceptual categories of other paradigms, do not match the current vocabulary used by social scientists, and are ideologically biased
  5. Dabbadie, M.; Blancherie, J.M.: Alexandria, a multilingual dictionary for knowledge management purposes (2006) 0.13
    0.12529622 = product of:
      0.25059244 = sum of:
        0.25059244 = sum of:
          0.2038695 = weight(_text_:translations in 2465) [ClassicSimilarity], result of:
            0.2038695 = score(doc=2465,freq=2.0), product of:
              0.42042637 = queryWeight, product of:
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.05747565 = queryNorm
              0.48491132 = fieldWeight in 2465, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.046875 = fieldNorm(doc=2465)
          0.046722937 = weight(_text_:22 in 2465) [ClassicSimilarity], result of:
            0.046722937 = score(doc=2465,freq=2.0), product of:
              0.20126992 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05747565 = queryNorm
              0.23214069 = fieldWeight in 2465, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2465)
      0.5 = coord(1/2)
    
    Abstract
    Alexandria is an innovation of international impact. It is the only multilingual dictionary for websites and PCs. A double click on a word opens a small window that gives interactive translations between 22 languages and includes meaning, synonyms and associated expressions. It is an ASP application grounded on a semantic network that is portable to any operating system or platform. Behind the application is the Integral Dictionary, the semantic network created by Memodata. Alexandria can be customized with specific vocabulary, descriptive articles, images, sounds, videos, etc. Its domains of application are considerable: e-tourism, online media, language learning, international websites. Alexandria has also proved to be a basic tool for knowledge management purposes. The application can be customized according to a user's or an organization's needs. An application dedicated to mobile devices is currently being developed. Future developments are planned in the field of e-tourism in relation with the French "pôles de compétitivité".
  6. Yee, M.M.: What is a work? : part 2: the Anglo-American cataloging codes (1994) 0.13
    0.12529622 = product of:
      0.25059244 = sum of:
        0.25059244 = sum of:
          0.2038695 = weight(_text_:translations in 5945) [ClassicSimilarity], result of:
            0.2038695 = score(doc=5945,freq=2.0), product of:
              0.42042637 = queryWeight, product of:
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.05747565 = queryNorm
              0.48491132 = fieldWeight in 5945, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.046875 = fieldNorm(doc=5945)
          0.046722937 = weight(_text_:22 in 5945) [ClassicSimilarity], result of:
            0.046722937 = score(doc=5945,freq=2.0), product of:
              0.20126992 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05747565 = queryNorm
              0.23214069 = fieldWeight in 5945, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5945)
      0.5 = coord(1/2)
    
    Abstract
    Anglo-American codes are examined to determine the implicit or acting concept of work in each, in order to trace the development of our current implicit concept of work, as embodied in AACR2R. The following conditions are examined, using comparison tables: (1) contraction of a work (abridgements, condensations, digests, epitomes, outlines, chrestomathies, excerpts, extracts, selections); and (2) change in substance of a work (adaptations, dramatizations, free translations, novelizations, paraphrases, versifications, films or filmstrips of a text, musical arrangements, musical amplifications, musical settings, musical simplifications, musical transcriptions, musical versions, parodies, imitations, performances, reproductions of art works, revisions, editing, enlargements, expansion, updating, translation).
    Source
    Cataloging and classification quarterly. 19(1994) no.2, S.5-22
  7. Bagheri, M.: Development of thesauri in Iran (2006) 0.13
    0.12529622 = product of:
      0.25059244 = sum of:
        0.25059244 = sum of:
          0.2038695 = weight(_text_:translations in 260) [ClassicSimilarity], result of:
            0.2038695 = score(doc=260,freq=2.0), product of:
              0.42042637 = queryWeight, product of:
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.05747565 = queryNorm
              0.48491132 = fieldWeight in 260, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.046875 = fieldNorm(doc=260)
          0.046722937 = weight(_text_:22 in 260) [ClassicSimilarity], result of:
            0.046722937 = score(doc=260,freq=2.0), product of:
              0.20126992 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05747565 = queryNorm
              0.23214069 = fieldWeight in 260, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=260)
      0.5 = coord(1/2)
    
    Abstract
    The need for Persian thesauri became apparent during the late 1960s with the advent of documentation centres in Iran. The first Persian controlled vocabulary was published by IRANDOC in 1977. Other centres worked on translations of existing thesauri, but it was soon realised that these efforts did not meet the needs of the centres. After the Islamic revolution in 1979, the foundation of new centres intensified the need for Persian thesauri, especially in the fields of history and government documents. Also, during the Iran-Iraq war, Iranian research centres produced reports in scientific and technical fields, both to support military requirements and to meet society's needs. In order to provide a comprehensive thesaurus, the Council of Scientific Research of Iran approved a project for the compilation of such a work. Nowadays, 12 Persian thesauri are available and others are being prepared, based on the literary corpus and conformity with characteristics of Iranian culture.
    Source
    Indexer. 25(2006) no.1, S.19-22
  8. Musmann, K.: ¬The diffusion of knowledge across the linguistic frontier : an examination of monographic translations (1989) 0.12
    0.12013126 = product of:
      0.24026252 = sum of:
        0.24026252 = product of:
          0.48052505 = sum of:
            0.48052505 = weight(_text_:translations in 602) [ClassicSimilarity], result of:
              0.48052505 = score(doc=602,freq=4.0), product of:
                0.42042637 = queryWeight, product of:
                  7.314861 = idf(docFreq=79, maxDocs=44218)
                  0.05747565 = queryNorm
                1.142947 = fieldWeight in 602, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.314861 = idf(docFreq=79, maxDocs=44218)
                  0.078125 = fieldNorm(doc=602)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Presents a preliminary assessment of the extent and characteristics of the translations of monographs as a form of information transfer and communication between language blocs. The study was based on statistical data provided by Unesco.
  9. Nath, I.: Machine translations : theories that make computers translate (1999) 0.12
    0.11892389 = product of:
      0.23784778 = sum of:
        0.23784778 = product of:
          0.47569555 = sum of:
            0.47569555 = weight(_text_:translations in 4420) [ClassicSimilarity], result of:
              0.47569555 = score(doc=4420,freq=2.0), product of:
                0.42042637 = queryWeight, product of:
                  7.314861 = idf(docFreq=79, maxDocs=44218)
                  0.05747565 = queryNorm
                1.1314598 = fieldWeight in 4420, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.314861 = idf(docFreq=79, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4420)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  10. Slavic, A.: UDC translations : a 2004 survey report and bibliography (2004) 0.12
    0.11892389 = product of:
      0.23784778 = sum of:
        0.23784778 = product of:
          0.47569555 = sum of:
            0.47569555 = weight(_text_:translations in 3744) [ClassicSimilarity], result of:
              0.47569555 = score(doc=3744,freq=2.0), product of:
                0.42042637 = queryWeight, product of:
                  7.314861 = idf(docFreq=79, maxDocs=44218)
                  0.05747565 = queryNorm
                1.1314598 = fieldWeight in 3744, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.314861 = idf(docFreq=79, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3744)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  11. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.11
    0.11464803 = sum of:
      0.09128656 = product of:
        0.27385968 = sum of:
          0.27385968 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.27385968 = score(doc=562,freq=2.0), product of:
              0.48727918 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.05747565 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.023361469 = product of:
        0.046722937 = sum of:
          0.046722937 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.046722937 = score(doc=562,freq=2.0), product of:
              0.20126992 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05747565 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  12. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2014) 0.11
    0.11247739 = product of:
      0.22495478 = sum of:
        0.22495478 = sum of:
          0.16989127 = weight(_text_:translations in 1962) [ClassicSimilarity], result of:
            0.16989127 = score(doc=1962,freq=2.0), product of:
              0.42042637 = queryWeight, product of:
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.05747565 = queryNorm
              0.4040928 = fieldWeight in 1962, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1962)
          0.055063512 = weight(_text_:22 in 1962) [ClassicSimilarity], result of:
            0.055063512 = score(doc=1962,freq=4.0), product of:
              0.20126992 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05747565 = queryNorm
              0.27358043 = fieldWeight in 1962, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1962)
      0.5 = coord(1/2)
    
    Abstract
    This article reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The article discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the Dewey Decimal Classification [DDC] (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
  13. Greiner-Petter, A.; Schubotz, M.; Cohl, H.S.; Gipp, B.: Semantic preserving bijective mappings for expressions involving special functions between computer algebra systems and document preparation systems (2019) 0.11
    0.11167932 = product of:
      0.22335865 = sum of:
        0.22335865 = sum of:
          0.19221002 = weight(_text_:translations in 5499) [ClassicSimilarity], result of:
            0.19221002 = score(doc=5499,freq=4.0), product of:
              0.42042637 = queryWeight, product of:
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.05747565 = queryNorm
              0.4571788 = fieldWeight in 5499, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                7.314861 = idf(docFreq=79, maxDocs=44218)
                0.03125 = fieldNorm(doc=5499)
          0.031148627 = weight(_text_:22 in 5499) [ClassicSimilarity], result of:
            0.031148627 = score(doc=5499,freq=2.0), product of:
              0.20126992 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05747565 = queryNorm
              0.15476047 = fieldWeight in 5499, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=5499)
      0.5 = coord(1/2)
    
    Abstract
    Purpose: Modern mathematicians and scientists of math-related disciplines often use Document Preparation Systems (DPS) to write and Computer Algebra Systems (CAS) to calculate mathematical expressions. Usually, they translate the expressions manually between DPS and CAS. This process is time-consuming and error-prone. The purpose of this paper is to automate this translation. The paper uses Maple and Mathematica as the CAS, and LaTeX as the DPS.
    Design/methodology/approach: Bruce Miller at the National Institute of Standards and Technology (NIST) developed a collection of special LaTeX macros that create links from mathematical symbols to their definitions in the NIST Digital Library of Mathematical Functions (DLMF). The authors use these macros to perform rule-based translations between the formulae in the DLMF and CAS. Moreover, the authors develop software to ease the creation of new rules and to discover inconsistencies.
    Findings: The authors created 396 mappings and translated 58.8 percent of DLMF formulae (2,405 expressions) successfully between Maple and the DLMF. For a significant percentage, the special function definitions in Maple and the DLMF were different: an atomic symbol in one system maps to a composite expression in the other system. The translator was also successfully used for automatic verification of mathematical online compendia and CAS. The evaluation techniques discovered two errors in the DLMF and one defect in Maple.
    Originality/value: This paper introduces the first translation tool for special functions between LaTeX and CAS. The approach improves on error-prone manual translations and can be used to verify mathematical online compendia and CAS.
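The rule-based translation between semantic LaTeX macros and CAS syntax can be sketched as a set of pattern/replacement rules. The macro names and rules below are simplified stand-ins for illustration, not the paper's actual 396 mappings:

```python
import re

# Hypothetical mapping rules: semantic LaTeX macro -> Maple-style syntax.
RULES = [
    (re.compile(r"\\BesselJ\{(?P<nu>[^}]*)\}\{(?P<z>[^}]*)\}"),
     r"BesselJ(\g<nu>, \g<z>)"),
    (re.compile(r"\\EulerGamma\{(?P<z>[^}]*)\}"),
     r"GAMMA(\g<z>)"),
]

def latex_to_cas(expr):
    """Apply every translation rule to a LaTeX expression."""
    for pattern, replacement in RULES:
        expr = pattern.sub(replacement, expr)
    return expr

print(latex_to_cas(r"\BesselJ{n}{z} + \EulerGamma{z}"))
# -> BesselJ(n, z) + GAMMA(z)
```

A real system must additionally handle nested arguments and verify that the special-function definitions agree on both sides, which is where the paper reports the atomic-symbol-versus-composite-expression mismatches.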
    Date
    20. 1.2015 18:30:22
  14. Dick, S.J.: Astronomy's Three Kingdom System : a comprehensive classification system of celestial objects (2019) 0.11
    0.110972084 = sum of:
      0.08371703 = product of:
        0.25115108 = sum of:
          0.25115108 = weight(_text_:objects in 5455) [ClassicSimilarity], result of:
            0.25115108 = score(doc=5455,freq=8.0), product of:
              0.30548716 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.05747565 = queryNorm
              0.82213306 = fieldWeight in 5455, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5455)
        0.33333334 = coord(1/3)
      0.027255049 = product of:
        0.054510098 = sum of:
          0.054510098 = weight(_text_:22 in 5455) [ClassicSimilarity], result of:
            0.054510098 = score(doc=5455,freq=2.0), product of:
              0.20126992 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05747565 = queryNorm
              0.2708308 = fieldWeight in 5455, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5455)
        0.5 = coord(1/2)
    
    Abstract
    Although classification has been an important aspect of astronomy since stellar spectroscopy in the late nineteenth century, to date no comprehensive classification system has existed for all classes of objects in the universe. Here we present such a system, and lay out its foundational definitions and principles. The system consists of the "Three Kingdoms" of planets, stars and galaxies, eighteen families, and eighty-two classes of objects. Gravitation is the defining organizing principle for the families and classes, and the physical nature of the objects is the defining characteristic of the classes. The system should prove useful for both scientific and pedagogical purposes.
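The kingdom/family/class hierarchy described above can be represented as a simple nested structure. In this minimal sketch the three kingdom names come from the abstract, while the family and class entries are hypothetical placeholders (the actual system defines eighteen families and eighty-two classes):

```python
from dataclasses import dataclass, field

@dataclass
class Kingdom:
    """One of the Three Kingdoms; families maps family name -> list of classes."""
    name: str
    families: dict = field(default_factory=dict)

# Kingdom names are from the abstract; the family/class below is illustrative.
system = {name: Kingdom(name) for name in ("Planets", "Stars", "Galaxies")}
system["Stars"].families["Example family"] = ["Example class A", "Example class B"]

print(sorted(system))  # -> ['Galaxies', 'Planets', 'Stars']
```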
    Date
    21.11.2019 18:46:22
  15. Malsburg, C. von der: ¬The correlation theory of brain function (1981) 0.11
    Abstract
    A summary of brain theory is given so far as it is contained within the framework of Localization Theory. Difficulties of this "conventional theory" are traced back to a specific deficiency: there is no way to express relations between active cells (as, for instance, their representing parts of the same object). A new theory is proposed to cure this deficiency. It introduces a new kind of dynamical control, termed synaptic modulation, according to which synapses switch between a conducting and a non-conducting state. The dynamics of this variable is controlled on a fast time scale by correlations in the temporal fine structure of cellular signals. Furthermore, conventional synaptic plasticity is replaced by a refined version. Synaptic modulation and plasticity form the basis for short-term and long-term memory, respectively. Signal correlations, shaped by the variable network, express structure and relationships within objects. In particular, the figure-ground problem may be solved in this way. Synaptic modulation introduces flexibility into cerebral networks, which is necessary to solve the invariance problem. Since momentarily useless connections are deactivated, interference between different memory traces can be reduced, and memory capacity increased, in comparison with conventional associative memory.
    Source
    http://cogprints.org/1380/1/vdM_correlation.pdf
  16. Beall, J.: Approaches to expansions : case studies from the German and Vietnamese translations (2003) 0.10
    Object
    DDC-22
  17. Leazer, G.H.; Smiraglia, R.P.: Bibliographic families in the library catalog : a qualitative analysis and grounded theory (1999) 0.10
    Abstract
    Forty-five years have passed since Lubetzky outlined the primary objectives of the catalog, which should facilitate the identification of specific bibliographic entities, and the explicit recognition of works and relationships among them. Still, our catalogs are better designed to identify specific bibliographic entities than they are to guide users among the network of potential related editions and translations of works. In this paper, we seek to examine qualitatively some interesting examples of families of related works, defined as bibliographic families. Although the cases described here were derived from a random sample, this is a qualitative analysis. We selected these bibliographic families for their ability to reveal the strengths and weaknesses of Leazer's model, which incorporates relationship taxonomies by Tillett and Smiraglia. Qualitative analysis is intended to produce an explanation of a phenomenon, particularly an identification of any patterns observed. Patterns observed in qualitative analysis can be used to affirm external observations of the same phenomena; conclusions can contribute to what is known as grounded theory, a unique explanation grounded in the phenomenon under study. We arrive at two statements of grounded theory concerning bibliographic families: cataloger-generated implicit maps among works are inadequate, and qualitative analysis suggests the complexity of even the smallest bibliographic families. We conclude that user behavior study is needed to suggest which alternative maps are preferable.
    Date
    10. 9.2000 17:38:22
  18. Mönch, C.; Aalberg, T.: Automatic conversion from MARC to FRBR (2003) 0.10
    Abstract
    Catalogs have for centuries been the main tool that enabled users to search for items in a library by author, title, or subject. A catalog can be interpreted as a set of bibliographic records, where each record acts as a surrogate for a publication. Every record describes a specific publication and contains the data that is used to create the indexes of search systems and the information that is presented to the user. Bibliographic records are often captured and exchanged by the use of the MARC format. Although there are numerous "dialects" of the MARC format in use, they are usually crafted on the same basis and are interoperable with each other, to a certain extent. The data model of a MARC-based catalog, however, is "[...] extremely non-normalized with excessive replication of data" [1]. For instance, a literary work that exists in numerous editions and translations is likely to yield a large result set because each edition or translation is represented by an individual record that is unrelated to other records describing the same work.
    Source
    Research and advanced technology for digital libraries : 7th European Conference, proceedings / ECDL 2003, Trondheim, Norway, August 17-22, 2003
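    The non-normalized result sets described in the abstract of entry 18 can be sketched with a small grouping routine. This is a minimal illustration only: the record layout (plain dicts keyed by author and uniform title) is a hypothetical stand-in for real MARC fields, and the snippet does not model the authors' actual conversion algorithm.

    ```python
    # Minimal sketch: cluster flat, edition-level records into FRBR-style
    # "works" by a shared (author, uniform_title) key. The dict layout is
    # hypothetical, not the actual MARC field structure.
    from collections import defaultdict

    def group_into_works(records):
        """Group edition-level records that represent the same work."""
        works = defaultdict(list)
        for rec in records:
            key = (rec["author"].strip().lower(),
                   rec["uniform_title"].strip().lower())
            works[key].append(rec)
        return dict(works)

    records = [
        {"author": "Ibsen, H.", "uniform_title": "Peer Gynt",
         "edition": "Oslo, 1867"},
        {"author": "Ibsen, H.", "uniform_title": "Peer Gynt",
         "edition": "London, 1892 (transl.)"},
        {"author": "Ibsen, H.", "uniform_title": "A Doll's House",
         "edition": "Copenhagen, 1879"},
    ]

    works = group_into_works(records)
    # Two works; the first gathers both the original and the translation.
    ```

    A real conversion would have to derive the work key from heterogeneous MARC fields, which is exactly where the replication problem the abstract describes arises.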
  19. Godby, C.J.; Smith, D.; Childress, E.: Encoding application profiles in a computational model of the crosswalk (2008) 0.10
    Abstract
    OCLC's Crosswalk Web Service (Godby, Smith and Childress, 2008) formalizes the notion of crosswalk, as defined in Gill et al. (n.d.), by hiding technical details and permitting the semantic equivalences to emerge as the centerpiece. One outcome is that metadata experts, who are typically not programmers, can enter the translation logic into a spreadsheet that can be automatically converted into executable code. In this paper, we describe the implementation of the Dublin Core Terms application profile in the management of crosswalks involving MARC. A crosswalk that encodes an application profile extends the typical format with two columns: one that annotates the namespace to which an element belongs, and one that annotates a 'broader-narrower' relation between a pair of elements, such as Dublin Core coverage and Dublin Core Terms spatial. This information is sufficient to produce scripts written in OCLC's Semantic Equivalence Expression Language (or Seel), which are called from the Crosswalk Web Service to generate production-grade translations. With its focus on elements that can be mixed, matched, added, and redefined, the application profile (Heery and Patel, 2000) is a natural fit with the translation model of the Crosswalk Web Service, which attempts to achieve interoperability by mapping one pair of elements at a time.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
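    The spreadsheet-style crosswalk with its extra namespace and broader-narrower columns, as described in the abstract of entry 19, can be sketched as a plain mapping table applied one element pair at a time. All field tags and element names below are illustrative assumptions; neither Seel nor the Crosswalk Web Service API is modeled.

    ```python
    # Minimal sketch of an application-profile crosswalk as a row table.
    # Each row: (source element, target element, target namespace, relation),
    # mirroring the two extra columns described in the abstract. The tags
    # and element names are hypothetical, not the service's actual schema.
    CROSSWALK = [
        ("245a", "title",   "dc",      None),
        ("260c", "date",    "dc",      None),
        ("522",  "spatial", "dcterms", "narrower-of:coverage"),
    ]

    def apply_crosswalk(record, crosswalk):
        """Translate one record dict, one element pair at a time."""
        out = {}
        for src, tgt, ns, _relation in crosswalk:
            if src in record:
                out[f"{ns}:{tgt}"] = record[src]
        return out

    marc_like = {"245a": "Knowledge organization", "260c": "2016",
                 "522": "Berlin"}
    translated = apply_crosswalk(marc_like, CROSSWALK)
    # → {"dc:title": "Knowledge organization", "dc:date": "2016",
    #    "dcterms:spatial": "Berlin"}
    ```

    The pair-at-a-time loop is the point of the design: each mapping row stands alone, so rows can be mixed, added, or redefined without touching the rest of the table.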
  20. Ménard, E.; Khashman, N.; Kochkina, S.; Torres-Moreno, J.-M.; Velazquez-Morales, P.; Zhou, F.; Jourlin, P.; Rawat, P.; Peinl, P.; Linhares Pontes, E.; Brunetti., I.: ¬A second life for TIIARA : from bilingual to multilingual! (2016) 0.10
    Abstract
    Multilingual controlled vocabularies are rare and often very limited in the choice of languages offered. TIIARA (Taxonomy for Image Indexing and RetrievAl) is a bilingual taxonomy developed for image indexing and retrieval. This controlled vocabulary offers indexers and image searchers innovative and coherent access points for ordinary images. The preliminary steps of the elaboration of the bilingual structure are presented. For its initial development, TIIARA included only two languages, French and English. As a logical follow-up, TIIARA was translated into eight languages-Arabic, Spanish, Brazilian Portuguese, Mandarin Chinese, Italian, German, Hindi and Russian-in order to increase its international scope. This paper briefly describes the different stages of the development of the bilingual structure. The processes used in the translations are subsequently presented, as well as the main difficulties encountered by the translators. Adding more languages in TIIARA constitutes an added value for a controlled vocabulary meant to be used by image searchers, who are often limited by their lack of knowledge of multiple languages.
    Source
    Knowledge organization. 43(2016) no.1, S.22-34

Types

  • a 3557
  • m 388
  • el 225
  • s 169
  • x 40
  • b 39
  • i 23
  • r 22
  • ? 8
  • n 5
  • p 4
  • d 3
  • u 2
  • z 2
  • au 1
  • h 1
