Search (191 results, page 1 of 10)

  • type_ss:"el"
  1. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.09
    0.090503246 = product of:
      0.18100649 = sum of:
        0.18100649 = sum of:
          0.11981993 = weight(_text_:translation in 1967) [ClassicSimilarity], result of:
            0.11981993 = score(doc=1967,freq=2.0), product of:
              0.31015858 = queryWeight, product of:
                5.8275905 = idf(docFreq=353, maxDocs=44218)
                0.05322244 = queryNorm
              0.3863183 = fieldWeight in 1967, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.8275905 = idf(docFreq=353, maxDocs=44218)
                0.046875 = fieldNorm(doc=1967)
          0.061186567 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
            0.061186567 = score(doc=1967,freq=4.0), product of:
              0.18637592 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05322244 = queryNorm
              0.32829654 = fieldWeight in 1967, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1967)
      0.5 = coord(1/2)
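The explain tree above is standard Lucene ClassicSimilarity output. A minimal sketch, assuming Lucene's classic TF-IDF formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), score = queryWeight * fieldWeight), that reproduces the per-term weights shown:

```python
import math

def classic_term_score(freq, doc_freq, max_docs, field_norm, query_norm):
    """Recompute one weight(...) line of a ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                           # tf(freq) = sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq, maxDocs)
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

# Values taken from weight(_text_:translation in 1967) above:
score = classic_term_score(freq=2.0, doc_freq=353, max_docs=44218,
                           field_norm=0.046875, query_norm=0.05322244)
# score is approximately 0.11981993, matching the explain output
```

The document score is then the sum of the term scores scaled by the coord factor, here (0.11981993 + 0.061186567) * 0.5 = 0.090503246, as reported for result 1.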
    
    Abstract
     This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
  2. Priss, U.: Description logic and faceted knowledge representation (1999) 0.08
    0.081542686 = product of:
      0.16308537 = sum of:
        0.16308537 = sum of:
          0.11981993 = weight(_text_:translation in 2655) [ClassicSimilarity], result of:
            0.11981993 = score(doc=2655,freq=2.0), product of:
              0.31015858 = queryWeight, product of:
                5.8275905 = idf(docFreq=353, maxDocs=44218)
                0.05322244 = queryNorm
              0.3863183 = fieldWeight in 2655, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.8275905 = idf(docFreq=353, maxDocs=44218)
                0.046875 = fieldNorm(doc=2655)
          0.043265432 = weight(_text_:22 in 2655) [ClassicSimilarity], result of:
            0.043265432 = score(doc=2655,freq=2.0), product of:
              0.18637592 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05322244 = queryNorm
              0.23214069 = fieldWeight in 2655, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2655)
      0.5 = coord(1/2)
    
    Abstract
     The term "facet" was introduced into the field of library classification systems by Ranganathan in the 1930s [Ranganathan, 1962]. A facet is a viewpoint or aspect. In contrast to traditional classification systems, faceted systems are modular in that a domain is analyzed in terms of baseline facets which are then synthesized. In this paper, the term "facet" is used in a broader sense. Facets can describe different aspects on the same level of abstraction or the same aspect on different levels of abstraction. The notion of facets is related to database views, multicontexts and conceptual scaling in formal concept analysis [Ganter and Wille, 1999], polymorphism in object-oriented design, aspect-oriented programming, and views and contexts in description logic and semantic networks. This paper presents a definition of facets in terms of faceted knowledge representation that incorporates the traditional narrower notion of facets and potentially facilitates translation between different knowledge representation formalisms. A goal of this approach is a modular, machine-aided knowledge base design mechanism. A possible application is faceted thesaurus construction for information retrieval and data mining. Reasoning complexity depends on the size of the modules (facets). A more general analysis of complexity is left for future research.
    Date
    22. 1.2016 17:30:31
  3. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.07
    0.070442796 = product of:
      0.14088559 = sum of:
        0.14088559 = product of:
          0.42265674 = sum of:
            0.42265674 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.42265674 = score(doc=1826,freq=2.0), product of:
                0.45122045 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05322244 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
     http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  4. Rauber, A.: Digital preservation in data-driven science : on the importance of process capture, preservation and validation (2012) 0.06
    0.05769258 = product of:
      0.11538516 = sum of:
        0.11538516 = product of:
          0.34615546 = sum of:
            0.34615546 = weight(_text_:object's in 469) [ClassicSimilarity], result of:
              0.34615546 = score(doc=469,freq=2.0), product of:
                0.52717507 = queryWeight, product of:
                  9.905128 = idf(docFreq=5, maxDocs=44218)
                  0.05322244 = queryNorm
                0.65662336 = fieldWeight in 469, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  9.905128 = idf(docFreq=5, maxDocs=44218)
                  0.046875 = fieldNorm(doc=469)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
     Current digital preservation is strongly biased towards data objects: digital files of document-style objects, or encapsulated and largely self-contained objects. To provide authenticity and provenance information, comprehensive metadata models are deployed to document information on an object's context. Yet, we claim that simply documenting an object's context may not be sufficient to ensure proper provenance and to fulfill the stated preservation goals. Specifically in e-Science and business settings, capturing, documenting and preserving entire processes may be necessary to meet the preservation goals. We thus present an approach for capturing, documenting and preserving processes, and means to assess their authenticity upon re-execution. We will discuss options as well as limitations and open challenges to achieve sound preservation, specifically within scientific processes.
  5. Kuhagen, J.: RDA content in multiple languages : a new standard not only for libraries (2016) 0.06
    0.056483664 = product of:
      0.11296733 = sum of:
        0.11296733 = product of:
          0.22593465 = sum of:
            0.22593465 = weight(_text_:translation in 2955) [ClassicSimilarity], result of:
              0.22593465 = score(doc=2955,freq=4.0), product of:
                0.31015858 = queryWeight, product of:
                  5.8275905 = idf(docFreq=353, maxDocs=44218)
                  0.05322244 = queryNorm
                0.7284488 = fieldWeight in 2955, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.8275905 = idf(docFreq=353, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2955)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A summary of the presence of RDA content in languages other than English in RDA Toolkit, in the RDA Registry, in the RIMMF data editor, and as separate translations is given. Translation policy is explained and the benefits of translation on the content of RDA are noted.
  6. Popper, K.R.: Three worlds : the Tanner lecture on human values. Deliverd at the University of Michigan, April 7, 1978 (1978) 0.06
    0.056354232 = product of:
      0.112708464 = sum of:
        0.112708464 = product of:
          0.33812538 = sum of:
            0.33812538 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.33812538 = score(doc=230,freq=2.0), product of:
                0.45122045 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05322244 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
     https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  7. Panzer, M.: Designing identifiers for the DDC (2007) 0.05
    0.054141097 = product of:
      0.10828219 = sum of:
        0.10828219 = sum of:
          0.059909966 = weight(_text_:translation in 1752) [ClassicSimilarity], result of:
            0.059909966 = score(doc=1752,freq=2.0), product of:
              0.31015858 = queryWeight, product of:
                5.8275905 = idf(docFreq=353, maxDocs=44218)
                0.05322244 = queryNorm
              0.19315915 = fieldWeight in 1752, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.8275905 = idf(docFreq=353, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1752)
          0.04837223 = weight(_text_:22 in 1752) [ClassicSimilarity], result of:
            0.04837223 = score(doc=1752,freq=10.0), product of:
              0.18637592 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05322244 = queryNorm
              0.2595412 = fieldWeight in 1752, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1752)
      0.5 = coord(1/2)
    
    Content
    "Although the Dewey Decimal Classification is currently available on the web to subscribers as WebDewey and Abridged WebDewey in the OCLC Connexion service and in an XML version to licensees, OCLC does not provide any "web services" based on the DDC. By web services, we mean presentation of the DDC to other machines (not humans) for uses such as searching, browsing, classifying, mapping, harvesting, and alerting. In order to build web-accessible services based on the DDC, several elements have to be considered. One of these elements is the design of an appropriate Uniform Resource Identifier (URI) structure for Dewey. The design goals of mapping the entity model of the DDC into an identifier space can be summarized as follows: * Common locator for Dewey concepts and associated resources for use in web services and web applications * Use-case-driven, but not directly related to and outlasting a specific use case (persistency) * Retraceable path to a concept rather than an abstract identification, reusing a means of identification that is already present in the DDC and available in existing metadata. We have been working closely with our colleagues in the OCLC Office of Research (especially Andy Houghton as well as Eric Childress, Diane Vizine-Goetz, and Stu Weibel) on a preliminary identifier syntax. 
     The basic identifier format we are currently exploring is: http://dewey.info/{aspect}/{object}/{locale}/{type}/{version}/{resource} where
     * {aspect} is the aspect associated with an {object} - the current value set of aspect contains "concept", "scheme", and "index"; additional ones are under exploration
     * {object} is a type of {aspect}
     * {locale} identifies a Dewey translation
     * {type} identifies a Dewey edition type and contains, at a minimum, the values "edn" for the full edition or "abr" for the abridged edition
     * {version} identifies a Dewey edition version
     * {resource} identifies a resource associated with an {object} in the context of {locale}, {type}, and {version}
    Some examples of identifiers for concepts follow: <http://dewey.info/concept/338.4/en/edn/22/> This identifier is used to retrieve or identify the 338.4 concept in the English-language version of Edition 22. <http://dewey.info/concept/338.4/de/edn/22/> This identifier is used to retrieve or identify the 338.4 concept in the German-language version of Edition 22. <http://dewey.info/concept/333.7-333.9/> This identifier is used to retrieve or identify the 333.7-333.9 concept across all editions and language versions. <http://dewey.info/concept/333.7-333.9/about.skos> This identifier is used to retrieve a SKOS representation of the 333.7-333.9 concept (using the "resource" element). There are several open issues at this preliminary stage of development: Use cases: URIs need to represent the range of statements or questions that could be submitted to a Dewey web service. Therefore, it seems that some general questions have to be answered first: What information does an agent have when coming to a Dewey web service? What kind of questions will such an agent ask? Placement of the {locale} component: It is still an open question if the {locale} component should be placed after the {version} component instead (<http://dewey.info/concept/338.4/edn/22/en>) to emphasize that the most important instantiation of a Dewey class is its edition, not its language version. From a services point of view, however, it could make more sense to keep the current arrangement, because users are more likely to come to the service with a present understanding of the language version they are seeking without knowing the specifics of a certain edition in which they are trying to find topics. Identification of other Dewey entities: The goal is to create a locator that does not answer all, but a lot of questions that could be asked about the DDC. Which entities are missing but should be surfaced for services or user agents? How will those services or agents interact with them? 
     Should some entities be rendered in a different way than presented? For example, (how) should the DDC Summaries be retrievable? Would it be necessary to make the DDC Manual accessible through this identifier structure?"
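The identifier pattern described above can be sketched as a small template function; the function name `dewey_uri` and its keyword defaults are illustrative, not part of the OCLC proposal:

```python
def dewey_uri(aspect, obj, locale=None, type_=None, version=None, resource=None):
    """Build a http://dewey.info/ identifier from the proposed components.

    Unset components are omitted, as in /concept/333.7-333.9/about.skos,
    which addresses a concept across all editions and language versions.
    """
    parts = [p for p in (aspect, obj, locale, type_, version, resource) if p]
    path = "/".join(parts)
    # the concept examples in the text carry a trailing slash; resource URIs do not
    return "http://dewey.info/" + path + ("" if resource else "/")

# 338.4 in the English-language version of Edition 22:
uri = dewey_uri("concept", "338.4", locale="en", type_="edn", version="22")
# uri == "http://dewey.info/concept/338.4/en/edn/22/"
```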
  8. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I.: Attention Is all you need (2017) 0.05
    0.05188356 = product of:
      0.10376712 = sum of:
        0.10376712 = product of:
          0.20753424 = sum of:
            0.20753424 = weight(_text_:translation in 970) [ClassicSimilarity], result of:
              0.20753424 = score(doc=970,freq=6.0), product of:
                0.31015858 = queryWeight, product of:
                  5.8275905 = idf(docFreq=353, maxDocs=44218)
                  0.05322244 = queryNorm
                0.669123 = fieldWeight in 970, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.8275905 = idf(docFreq=353, maxDocs=44218)
                  0.046875 = fieldNorm(doc=970)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
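At its core, the attention mechanism the abstract relies on is scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V. A dependency-free sketch for a single head, with plain Python lists standing in for matrices and no learned projections:

```python
import math

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = len(K[0])
    out = []
    for q in Q:
        # similarity of this query to every key, scaled by sqrt(d_k)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        weights = softmax(scores)
        # weighted average of the value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# A query aligned with the first key attends almost entirely to the first value:
result = attention([[10.0, 0.0]], [[10.0, 0.0], [0.0, 10.0]], [[1.0, 0.0], [0.0, 1.0]])
```

In the full Transformer, Q, K and V are learned linear projections of the token embeddings and several such heads run in parallel; this sketch only shows the core weighting step.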
  9. Multilingual information management : current levels and future abilities. A report Commissioned by the US National Science Foundation and also delivered to the European Commission's Language Engineering Office and the US Defense Advanced Research Projects Agency, April 1999 (1999) 0.04
    0.039939977 = product of:
      0.079879954 = sum of:
        0.079879954 = product of:
          0.15975991 = sum of:
            0.15975991 = weight(_text_:translation in 6068) [ClassicSimilarity], result of:
              0.15975991 = score(doc=6068,freq=8.0), product of:
                0.31015858 = queryWeight, product of:
                  5.8275905 = idf(docFreq=353, maxDocs=44218)
                  0.05322244 = queryNorm
                0.51509106 = fieldWeight in 6068, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.8275905 = idf(docFreq=353, maxDocs=44218)
                  0.03125 = fieldNorm(doc=6068)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Over the past 50 years, a variety of language-related capabilities has been developed in machine translation, information retrieval, speech recognition, text summarization, and so on. These applications rest upon a set of core techniques such as language modeling, information extraction, parsing, generation, and multimedia planning and integration; and they involve methods using statistics, rules, grammars, lexicons, ontologies, training techniques, and so on. It is a puzzling fact that although all of this work deals with language in some form or other, the major applications have each developed a separate research field. For example, there is no reason why speech recognition techniques involving n-grams and hidden Markov models could not have been used in machine translation 15 years earlier than they were, or why some of the lexical and semantic insights from the subarea called Computational Linguistics are still not used in information retrieval.
    This picture will rapidly change. The twin challenges of massive information overload via the web and ubiquitous computers present us with an unavoidable task: developing techniques to handle multilingual and multi-modal information robustly and efficiently, with as high quality performance as possible. The most effective way for us to address such a mammoth task, and to ensure that our various techniques and applications fit together, is to start talking across the artificial research boundaries. Extending the current technologies will require integrating the various capabilities into multi-functional and multi-lingual natural language systems. However, at this time there is no clear vision of how these technologies could or should be assembled into a coherent framework. What would be involved in connecting a speech recognition system to an information retrieval engine, and then using machine translation and summarization software to process the retrieved text? How can traditional parsing and generation be enhanced with statistical techniques? What would be the effect of carefully crafted lexicons on traditional information retrieval? At which points should machine translation be interleaved within information retrieval systems to enable multilingual processing?
  10. Mimno, D.; Crane, G.; Jones, A.: Hierarchical catalog records : implementing a FRBR catalog (2005) 0.04
    0.039939977 = product of:
      0.079879954 = sum of:
        0.079879954 = product of:
          0.15975991 = sum of:
            0.15975991 = weight(_text_:translation in 1183) [ClassicSimilarity], result of:
              0.15975991 = score(doc=1183,freq=8.0), product of:
                0.31015858 = queryWeight, product of:
                  5.8275905 = idf(docFreq=353, maxDocs=44218)
                  0.05322244 = queryNorm
                0.51509106 = fieldWeight in 1183, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.8275905 = idf(docFreq=353, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1183)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    IFLA's Functional Requirements for Bibliographic Records (FRBR) lay the foundation for a new generation of cataloging systems that recognize the difference between a particular work (e.g., Moby Dick), diverse expressions of that work (e.g., translations into German, Japanese and other languages), different versions of the same basic text (e.g., the Modern Library Classics vs. Penguin editions), and particular items (a copy of Moby Dick on the shelf). Much work has gone into finding ways to infer FRBR relationships between existing catalog records and modifying catalog interfaces to display those relationships. Relatively little work, however, has gone into exploring the creation of catalog records that are inherently based on the FRBR hierarchy of works, expressions, manifestations, and items. The Perseus Digital Library has created a new catalog that implements such a system for a small collection that includes many works with multiple versions. We have used this catalog to explore some of the implications of hierarchical catalog records for searching and browsing. Current online library catalog interfaces present many problems for searching. One commonly cited failure is the inability to find and collocate all versions of a distinct intellectual work that exist in a collection and the inability to take into account known variations in titles and personal names (Yee 2005). The IFLA Functional Requirements for Bibliographic Records (FRBR) attempts to address some of these failings by introducing the concept of multiple interrelated bibliographic entities (IFLA 1998). 
In particular, relationships between abstract intellectual works and the various published instances of those works are divided into a four-level hierarchy of works (such as the Aeneid), expressions (Robert Fitzgerald's translation of the Aeneid), manifestations (a particular paperback edition of Robert Fitzgerald's translation of the Aeneid), and items (my copy of a particular paperback edition of Robert Fitzgerald's translation of the Aeneid). In this formulation, each level in the hierarchy "inherits" information from the preceding level. Much of the work on FRBRized catalogs so far has focused on organizing existing records that describe individual physical books. Relatively little work has gone into rethinking what information should be in catalog records, or how the records should relate to each other. It is clear, however, that a more "native" FRBR catalog would include separate records for works, expressions, manifestations, and items. In this way, all information about a work would be centralized in one record. Records for subsequent expressions of that work would add only the information specific to each expression: Samuel Butler's translation of the Iliad does not need to repeat the fact that the work was written by Homer. This approach has certain inherent advantages for collections with many versions of the same works: new publications can be cataloged more quickly, and records can be stored and updated more efficiently.
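The four-level inheritance described above can be sketched as linked records in which each level stores only what is specific to it; the class and attribute names (and the sample publisher data) are illustrative, not a cataloging standard:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Work:
    title: str
    creator: str                      # stated once, at the work level

@dataclass
class Expression:
    work: Work
    language: str
    translator: Optional[str] = None  # only expression-specific information

@dataclass
class Manifestation:
    expression: Expression
    publisher: str
    year: int

@dataclass
class Item:
    manifestation: Manifestation
    shelfmark: str

    def creator(self):
        # "inherited" from the work record; never repeated at lower levels
        return self.manifestation.expression.work.creator

iliad = Work("Iliad", "Homer")
butler = Expression(iliad, "English", translator="Samuel Butler")
edition = Manifestation(butler, publisher="Longmans, Green", year=1898)
my_copy = Item(edition, shelfmark="PA4025.A2")  # hypothetical shelfmark
```

As the abstract notes, the record for Samuel Butler's translation never repeats that the work was written by Homer; `my_copy.creator()` resolves it by walking up the hierarchy.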
  11. Oard, D.W.: Serving users in many languages : cross-language information retrieval for digital libraries (1997) 0.04
    0.03530229 = product of:
      0.07060458 = sum of:
        0.07060458 = product of:
          0.14120916 = sum of:
            0.14120916 = weight(_text_:translation in 1261) [ClassicSimilarity], result of:
              0.14120916 = score(doc=1261,freq=4.0), product of:
                0.31015858 = queryWeight, product of:
                  5.8275905 = idf(docFreq=353, maxDocs=44218)
                  0.05322244 = queryNorm
                0.4552805 = fieldWeight in 1261, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.8275905 = idf(docFreq=353, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1261)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We are rapidly constructing an extensive network infrastructure for moving information across national boundaries, but much remains to be done before linguistic barriers can be surmounted as effectively as geographic ones. Users seeking information from a digital library could benefit from the ability to query large collections once using a single language, even when more than one language is present in the collection. If the information they locate is not available in a language that they can read, some form of translation will be needed. At present, multilingual thesauri such as EUROVOC help to address this challenge by facilitating controlled vocabulary search using terms from several languages, and services such as INSPEC produce English abstracts for documents in other languages. On the other hand, support for free text searching across languages is not yet widely deployed, and fully automatic machine translation is presently neither sufficiently fast nor sufficiently accurate to adequately support interactive cross-language information seeking. An active and rapidly growing research community has coalesced around these and other related issues, applying techniques drawn from several fields - notably information retrieval and natural language processing - to provide access to large multilingual collections.
  12. Buttò, S.: RDA: analyses, considerations and activities by the Central Institute for the Union Catalogue of Italian Libraries and Bibliographic Information (ICCU) (2016) 0.04
    0.03530229 = product of:
      0.07060458 = sum of:
        0.07060458 = product of:
          0.14120916 = sum of:
            0.14120916 = weight(_text_:translation in 2958) [ClassicSimilarity], result of:
              0.14120916 = score(doc=2958,freq=4.0), product of:
                0.31015858 = queryWeight, product of:
                  5.8275905 = idf(docFreq=353, maxDocs=44218)
                  0.05322244 = queryNorm
                0.4552805 = fieldWeight in 2958, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.8275905 = idf(docFreq=353, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2958)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     The report aims to analyze the applicability of Resource Description and Access (RDA) within Italian public libraries, and also in archives and museums, in order to contribute to the discussion at the international level. The Central Institute for the Union Catalogue of Italian Libraries (ICCU) manages the online catalogue of the Italian libraries and the network of bibliographic services, and has the institutional task of coordinating cataloging and documentation activities for the Italian libraries. On March 31st, 2014, the Institute signed an agreement with ALA Publishing (American Library Association) for the Italian translation rights of RDA, now available and published in the RDA Toolkit. The Italian translation has been carried out by a Technical Working Group made up of the main national and academic libraries, cultural institutions and bibliographic agencies. The Group started from the need to study the new code in its textual detail, in order to better understand its principles, purposes and applicability, and finally its sustainability within the national context in relation to the area of bibliographic control. At the international level, starting from the publication of the Italian version of RDA and through the research carried out by ICCU and by the national working groups, the aim is a more direct comparison with the experiences of other European countries, also within the EURIG international context, for an exchange of experiences aimed at strengthening the informational content of cataloging data with respect to the history, cultural traditions and national identities of the different countries.
  13. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.04
    0.035221398 = product of:
      0.070442796 = sum of:
        0.070442796 = product of:
          0.21132837 = sum of:
            0.21132837 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.21132837 = score(doc=4388,freq=2.0), product of:
                0.45122045 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05322244 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Footnote
     See: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls.
  14. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.04
    0.035221398 = product of:
      0.070442796 = sum of:
        0.070442796 = product of:
          0.21132837 = sum of:
            0.21132837 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.21132837 = score(doc=5669,freq=2.0), product of:
                0.45122045 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05322244 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    "The English-language Wikipedia now has more than 6 million articles. The German-language Wikipedia comes second with 2.3 million articles, and the French-language Wikipedia third with 2.1 million articles (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> and Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: In light of the publication of the six-millionth article in the English-language Wikipedia last week, the community newspaper "Wikipedia Signpost" has called for a moratorium on the publication of articles about companies. This is not meant as a reproach to the Wikimedia Foundation, but the current measures to protect the encyclopedia against abusive undeclared paid editing are quite clearly not working. *"Since the volunteer authors are currently being overwhelmed by advertising in the form of Wikipedia articles, and since the WMF does not appear to be able to counter this in any way, the only viable course for the authors would be to prohibit the creation of new articles about companies for the time being"*, writes the user Smallbones in his editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> for today's issue."
  15. Koh, G.S.L.: Transferring intended messages of subject headings exemplified in the list of Korean subject headings (2006) 0.03
    0.03494748 = product of:
      0.06989496 = sum of:
        0.06989496 = product of:
          0.13978992 = sum of:
            0.13978992 = weight(_text_:translation in 6100) [ClassicSimilarity], result of:
              0.13978992 = score(doc=6100,freq=2.0), product of:
                0.31015858 = queryWeight, product of:
                  5.8275905 = idf(docFreq=353, maxDocs=44218)
                  0.05322244 = queryNorm
                0.4507047 = fieldWeight in 6100, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.8275905 = idf(docFreq=353, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6100)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper focuses on meaning as the core concern and challenge of interoperability in a multilingual context. Korean subject headings, presently translated from English, crystallize issues attached to the semantics of translation in at least two languages (Korean, with written Chinese, and English). Presenting a model microcosm, which explains grammatical and semantic characteristics, and allows a search for equivalence of headings that have the closest approximation of semantic ranges, the study concludes the necessary conditions for linking multilingual subject headings and suggests an interoperable model for the transfer of meaning of headings across languages and cultures.
  16. Oard, D.W.: Alternative approaches for cross-language text retrieval (1997) 0.03
    0.03494748 = product of:
      0.06989496 = sum of:
        0.06989496 = product of:
          0.13978992 = sum of:
            0.13978992 = weight(_text_:translation in 1164) [ClassicSimilarity], result of:
              0.13978992 = score(doc=1164,freq=8.0), product of:
                0.31015858 = queryWeight, product of:
                  5.8275905 = idf(docFreq=353, maxDocs=44218)
                  0.05322244 = queryNorm
                0.4507047 = fieldWeight in 1164, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.8275905 = idf(docFreq=353, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1164)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The explosive growth of the Internet and other sources of networked information has made automatic mediation of access to networked information sources an increasingly important problem. Much of this information is expressed as electronic text, and it is becoming practical to automatically convert some printed documents and recorded speech to electronic text as well. Thus, automated systems capable of detecting useful documents are finding widespread application. With even a small number of languages it can be inconvenient to issue the same query repeatedly in every language, so users who are able to read more than one language will likely prefer a multilingual text retrieval system over a collection of monolingual systems. And since reading ability in a language does not always imply fluent writing ability in that language, such users will likely find cross-language text retrieval particularly useful for languages in which they are less confident of their ability to express their information needs effectively. The use of such systems can also be beneficial if the user is able to read only a single language. For example, when only a small portion of the document collection will ever be examined by the user, performing retrieval before translation can be significantly more economical than performing translation before retrieval. So when the application is sufficiently important to justify the time and effort required for translation, those costs can be minimized if an effective cross-language text retrieval system is available. Even when translation is not available, there are circumstances in which cross-language text retrieval could be useful to a monolingual user. For example, a researcher might find a paper published in an unfamiliar language useful if that paper contains references to works by the same author that are in the researcher's native language.
  17. EuropeanaTech and Multilinguality : Issue 1 of EuropeanaTech Insight (2015) 0.03
    0.03458904 = product of:
      0.06917808 = sum of:
        0.06917808 = product of:
          0.13835616 = sum of:
            0.13835616 = weight(_text_:translation in 1832) [ClassicSimilarity], result of:
              0.13835616 = score(doc=1832,freq=6.0), product of:
                0.31015858 = queryWeight, product of:
                  5.8275905 = idf(docFreq=353, maxDocs=44218)
                  0.05322244 = queryNorm
                0.446082 = fieldWeight in 1832, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.8275905 = idf(docFreq=353, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1832)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Welcome to the very first issue of EuropeanaTech Insight, a multimedia publication about research and development within the EuropeanaTech community. EuropeanaTech is a very active community. It spans all of Europe and is made up of technical experts from the various disciplines within digital cultural heritage. At any given moment, members can be found presenting their work in project meetings, seminars and conferences around the world. Now, through EuropeanaTech Insight, we can share that inspiring work with the whole community. In our first three issues, we're showcasing topics discussed at the EuropeanaTech 2015 Conference, an exciting event that gave rise to lots of innovative ideas and fruitful conversations on the themes of data quality, data modelling, open data, data re-use, multilingualism and discovery. Welcome, bienvenue, bienvenido, Välkommen, Tervetuloa to the first Issue of EuropeanaTech Insight. Are we talking your language? No? Well I can guarantee you Europeana is. One of the European Union's great beauties and strengths is its diversity. That diversity is perhaps most evident in the 24 different languages spoken in the EU. Making it possible for all European citizens to easily and seamlessly communicate in their native language with others who do not speak that language is a huge technical undertaking. Translating documents, news, speeches and historical texts was once exclusively done manually. Clearly, that takes a huge amount of time and resources and means that not everything can be translated. However, with the advances in machine and automatic translation, it's becoming more possible to provide instant and pretty accurate translations. Europeana provides access to over 40 million digitised cultural heritage objects, offering content in over 33 languages. But what value does Europeana provide if people can only find results in a language they don't speak? None.
    That's why the EuropeanaTech community is collectively working towards making it more possible for everyone to discover our collections in their native language. In this issue of EuropeanaTech Insight, we hear from community members who are making great strides in machine translation and enrichment tools to help improve not only access to data, but also how we retrieve, browse and understand it.
    Content
    Stiller, J.: Automatic Solutions to Improve Multilingual Access in Europeana / Vila-Suero, D. and A. Gómez-Pérez: Multilingual Linked Data / Pilos, S.: Automated Translation: Connecting Culture / Karlgren, J.: Big Data, Libraries, and Multilingual New Text / Ziedins, J.: Latvia translates with hugo.lv
  18. Wang, S.; Isaac, A.; Schopman, B.; Schlobach, S.; Meij, L. van der: Matching multilingual subject vocabularies (2009) 0.03
    0.029954983 = product of:
      0.059909966 = sum of:
        0.059909966 = product of:
          0.11981993 = sum of:
            0.11981993 = weight(_text_:translation in 3035) [ClassicSimilarity], result of:
              0.11981993 = score(doc=3035,freq=2.0), product of:
                0.31015858 = queryWeight, product of:
                  5.8275905 = idf(docFreq=353, maxDocs=44218)
                  0.05322244 = queryNorm
                0.3863183 = fieldWeight in 3035, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.8275905 = idf(docFreq=353, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3035)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Most libraries and other cultural heritage institutions use controlled knowledge organisation systems, such as thesauri, to describe their collections. Unfortunately, as most of these institutions use different such systems, united access to heterogeneous collections is difficult. Things are even worse in an international context when concepts have labels in different languages. In order to overcome the multilingual interoperability problem between European Libraries, extensive work has been done to manually map concepts from different knowledge organisation systems, which is a tedious and expensive process. Within the TELplus project, we developed and evaluated methods to automatically discover these mappings, using different ontology matching techniques. In experiments on major French, English and German subject heading lists Rameau, LCSH and SWD, we show that we can automatically produce mappings of surprisingly good quality, even when using relatively naive translation and matching methods.
  19. Advances in ontologies : Proceedings of the Sixth Australasian Ontology Workshop Adelaide, Australia, 7 December 2010 (2010) 0.03
    0.029954983 = product of:
      0.059909966 = sum of:
        0.059909966 = product of:
          0.11981993 = sum of:
            0.11981993 = weight(_text_:translation in 4420) [ClassicSimilarity], result of:
              0.11981993 = score(doc=4420,freq=2.0), product of:
                0.31015858 = queryWeight, product of:
                  5.8275905 = idf(docFreq=353, maxDocs=44218)
                  0.05322244 = queryNorm
                0.3863183 = fieldWeight in 4420, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.8275905 = idf(docFreq=353, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4420)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    YAMATO: Yet Another More Advanced Top-level Ontology (invited talk) - Riichiro Mizoguchi A Visual Analytics Approach to Augmenting Formal Concepts with Relational Background Knowledge in a Biological Domain - Elma Akand, Michael Bain, Mark Temple Combining Ontologies And Natural Language - Wolf Fischer, Bernhard Bauer Comparison of Thesauri and Ontologies from a Semiotic Perspective - Daniel Kless, Simon Milton Fast Classification in Protégé: Snorocket as an OWL2 EL Reasoner - Michael Lawley, Cyril Bousquet Ontological Support for Consistency Checking of Engineering Design Workflows - Franz Maier, Wolfgang Mayer, Markus Stumptner Ontology Inferencing Rules and Operations in Conceptual Structure Theory - Philip H.P. Nguyen, Ken Kaneiwa, Minh-Quang Nguyen An Axiomatisation of Basic Formal Ontology with Projection Functions - Kerry Trentelman, Barry Smith Making Sense of Spreadsheet Data: A Case of Semantic Water Data Translation - Yanfeng Shu, David Ratcliffe, Geoffrey Squire, Michael Compton
  20. Kiros, R.; Salakhutdinov, R.; Zemel, R.S.: Unifying visual-semantic embeddings with multimodal neural language models (2014) 0.03
    0.029954983 = product of:
      0.059909966 = sum of:
        0.059909966 = product of:
          0.11981993 = sum of:
            0.11981993 = weight(_text_:translation in 1871) [ClassicSimilarity], result of:
              0.11981993 = score(doc=1871,freq=2.0), product of:
                0.31015858 = queryWeight, product of:
                  5.8275905 = idf(docFreq=353, maxDocs=44218)
                  0.05322244 = queryNorm
                0.3863183 = fieldWeight in 1871, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.8275905 = idf(docFreq=353, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1871)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Inspired by recent advances in multimodal learning and machine translation, we introduce an encoder-decoder pipeline that learns (a): a multimodal joint embedding space with images and text and (b): a novel language model for decoding distributed representations from our space. Our pipeline effectively unifies joint image-text embedding models with multimodal neural language models. We introduce the structure-content neural language model that disentangles the structure of a sentence to its content, conditioned on representations produced by the encoder. The encoder allows one to rank images and sentences while the decoder can generate novel descriptions from scratch. Using LSTM to encode sentences, we match the state-of-the-art performance on Flickr8K and Flickr30K without using object detections. We also set new best results when using the 19-layer Oxford convolutional network. Furthermore we show that with linear encoders, the learned embedding space captures multimodal regularities in terms of vector space arithmetic e.g. *image of a blue car* - "blue" + "red" is near images of red cars. Sample captions generated for 800 images are made available for comparison.
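The nested relevance figures printed under each hit above are Lucene "explain" trees for its classic tf-idf similarity. As a minimal sketch (assuming standard ClassicSimilarity semantics; the function name and variable names are ours, not the database's code), the printed weight for entry 15 can be reproduced from the listed freq, idf, fieldNorm and queryNorm values:

```python
import math

def classic_similarity(tf, idf, field_norm, query_norm):
    """Sketch of Lucene ClassicSimilarity for a single query term."""
    query_weight = idf * query_norm                   # "queryWeight" in the explain tree
    field_weight = math.sqrt(tf) * idf * field_norm   # "fieldWeight" = sqrt(tf) * idf * fieldNorm
    return query_weight * field_weight                # printed term weight

# Entry 15 (doc 6100), term "translation": freq=2.0, idf=5.8275905,
# fieldNorm=0.0546875, queryNorm=0.05322244
raw = classic_similarity(2.0, 5.8275905, 0.0546875, 0.05322244)
doc_score = raw * 0.5 * 0.5  # the two coord(1/2) factors in that entry's tree
```

Here `raw` comes out near the printed weight 0.13978992, and applying the two coord(1/2) factors yields roughly 0.03494748, matching the 0.03 shown (rounded) for that hit.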

Languages

  • e 94
  • d 88
  • a 2
  • el 2
  • i 1
  • nl 1

Types

  • a 86
  • i 10
  • m 5
  • r 3
  • s 3
  • x 3
  • b 2
  • n 1
  • p 1