Search (313 results, page 1 of 16)

  • Filter: type_ss:"el"
  1. OWL Web Ontology Language Test Cases (2004) 0.09
    0.089047045 = product of:
      0.17809409 = sum of:
        0.17809409 = sum of:
          0.1219849 = weight(_text_:language in 4685) [ClassicSimilarity], result of:
            0.1219849 = score(doc=4685,freq=6.0), product of:
              0.2030952 = queryWeight, product of:
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.051766515 = queryNorm
              0.60062915 = fieldWeight in 4685, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.0625 = fieldNorm(doc=4685)
          0.056109186 = weight(_text_:22 in 4685) [ClassicSimilarity], result of:
            0.056109186 = score(doc=4685,freq=2.0), product of:
              0.18127751 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051766515 = queryNorm
              0.30952093 = fieldWeight in 4685, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=4685)
      0.5 = coord(1/2)
    
    Abstract
    This document contains and presents test cases for the Web Ontology Language (OWL) approved by the Web Ontology Working Group. Many of the test cases illustrate the correct usage of the Web Ontology Language (OWL), and the formal meaning of its constructs. Other test cases illustrate the resolution of issues considered by the Working Group. Conformance for OWL documents and OWL document checkers is specified.
    Date
    14. 8.2011 13:33:22
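     The explain trees in this listing follow Lucene's ClassicSimilarity: tf is the square root of the term frequency, queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, each term contributes queryWeight × fieldWeight, and coord scales the sum by the fraction of query clauses that matched. A minimal Python sketch, using only the numbers printed in the tree for result 1, reproduces its 0.089 score:
     import math

     # Factors copied from the explain output for doc 4685 above.
     QUERY_NORM = 0.051766515
     FIELD_NORM = 0.0625
     IDF_LANGUAGE = 3.9232929   # idf(docFreq=2376, maxDocs=44218)
     IDF_22 = 3.5018296         # idf(docFreq=3622, maxDocs=44218)

     def term_weight(freq, idf):
         """ClassicSimilarity contribution of one query term: queryWeight * fieldWeight."""
         query_weight = idf * QUERY_NORM                      # e.g. 0.2030952 for "language"
         field_weight = math.sqrt(freq) * idf * FIELD_NORM    # tf * idf * fieldNorm
         return query_weight * field_weight

     score = 0.5 * (term_weight(6.0, IDF_LANGUAGE) + term_weight(2.0, IDF_22))  # coord(1/2)
     print(round(score, 9))   # ~0.089047045, the score shown for result 1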
  2. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.08
    0.07791616 = product of:
      0.15583232 = sum of:
        0.15583232 = sum of:
          0.10673678 = weight(_text_:language in 759) [ClassicSimilarity], result of:
            0.10673678 = score(doc=759,freq=6.0), product of:
              0.2030952 = queryWeight, product of:
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.051766515 = queryNorm
              0.5255505 = fieldWeight in 759, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.0546875 = fieldNorm(doc=759)
          0.049095538 = weight(_text_:22 in 759) [ClassicSimilarity], result of:
            0.049095538 = score(doc=759,freq=2.0), product of:
              0.18127751 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051766515 = queryNorm
              0.2708308 = fieldWeight in 759, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=759)
      0.5 = coord(1/2)
    
    Abstract
    XML will have a profound impact on the way data is exchanged on the Internet. An important feature of this language is the separation of content from presentation, which makes it easier to select and/or reformat the data. However, due to the likelihood of numerous industry and domain specific DTDs, those who wish to integrate information will still be faced with the problem of semantic interoperability. In this paper we discuss why this problem is not solved by XML, and then discuss why the Resource Description Framework is only a partial solution. We then present the SHOE language, which we feel has many of the features necessary to enable a semantic web, and describe an existing set of tools that make it easy to use the language.
    Date
    11. 5.2013 19:22:18
  3. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.07
    0.06851579 = product of:
      0.13703158 = sum of:
        0.13703158 = product of:
          0.41109473 = sum of:
            0.41109473 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.41109473 = score(doc=1826,freq=2.0), product of:
                0.43887708 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.051766515 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
     http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  4. Haslhofer, B.: Uniform SPARQL access to interlinked (digital library) sources (2007) 0.06
    0.063268594 = product of:
      0.12653719 = sum of:
        0.12653719 = sum of:
          0.07042801 = weight(_text_:language in 541) [ClassicSimilarity], result of:
            0.07042801 = score(doc=541,freq=2.0), product of:
              0.2030952 = queryWeight, product of:
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.051766515 = queryNorm
              0.34677336 = fieldWeight in 541, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.0625 = fieldNorm(doc=541)
          0.056109186 = weight(_text_:22 in 541) [ClassicSimilarity], result of:
            0.056109186 = score(doc=541,freq=2.0), product of:
              0.18127751 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051766515 = queryNorm
              0.30952093 = fieldWeight in 541, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=541)
      0.5 = coord(1/2)
    
    Abstract
    In this presentation, we therefore focus on a solution for providing uniform access to Digital Libraries and other online services. In order to enable uniform query access to heterogeneous sources, we must provide metadata interoperability in a way that a query language - in this case SPARQL - can cope with the incompatibility of the metadata in various sources without changing their already existing information models.
    Date
    26.12.2011 13:22:46
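     As a rough illustration of the uniform access the abstract describes, a client could issue the same SPARQL query against a mediated endpoint regardless of how the underlying sources store their metadata. The endpoint URL and query below are assumptions for the sketch, not details from the presentation (Python with the SPARQLWrapper library):
     from SPARQLWrapper import SPARQLWrapper, JSON

     # Hypothetical mediator endpoint; the presentation does not name one.
     endpoint = SPARQLWrapper("http://example.org/digital-library/sparql")
     endpoint.setQuery("""
         PREFIX dc: <http://purl.org/dc/elements/1.1/>
         SELECT ?record ?title WHERE {
             ?record dc:title ?title .
             FILTER regex(?title, "ontology", "i")
         } LIMIT 10
     """)
     endpoint.setReturnFormat(JSON)

     # Each binding maps variable names to value dictionaries.
     for binding in endpoint.query().convert()["results"]["bindings"]:
         print(binding["record"]["value"], "-", binding["title"]["value"])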
  5. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.05
    0.054812633 = product of:
      0.109625265 = sum of:
        0.109625265 = product of:
          0.32887578 = sum of:
            0.32887578 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.32887578 = score(doc=230,freq=2.0), product of:
                0.43887708 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.051766515 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
     https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  6. Panzer, M.: Designing identifiers for the DDC (2007) 0.05
    0.053052336 = product of:
      0.10610467 = sum of:
        0.10610467 = sum of:
          0.059055686 = weight(_text_:language in 1752) [ClassicSimilarity], result of:
            0.059055686 = score(doc=1752,freq=10.0), product of:
              0.2030952 = queryWeight, product of:
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.051766515 = queryNorm
              0.29077834 = fieldWeight in 1752, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1752)
          0.047048986 = weight(_text_:22 in 1752) [ClassicSimilarity], result of:
            0.047048986 = score(doc=1752,freq=10.0), product of:
              0.18127751 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051766515 = queryNorm
              0.2595412 = fieldWeight in 1752, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1752)
      0.5 = coord(1/2)
    
    Content
     Some examples of identifiers for concepts follow:
     <http://dewey.info/concept/338.4/en/edn/22/> This identifier is used to retrieve or identify the 338.4 concept in the English-language version of Edition 22.
     <http://dewey.info/concept/338.4/de/edn/22/> This identifier is used to retrieve or identify the 338.4 concept in the German-language version of Edition 22.
     <http://dewey.info/concept/333.7-333.9/> This identifier is used to retrieve or identify the 333.7-333.9 concept across all editions and language versions.
     <http://dewey.info/concept/333.7-333.9/about.skos> This identifier is used to retrieve a SKOS representation of the 333.7-333.9 concept (using the "resource" element).
     There are several open issues at this preliminary stage of development:
     Use cases: URIs need to represent the range of statements or questions that could be submitted to a Dewey web service. Therefore, it seems that some general questions have to be answered first: What information does an agent have when coming to a Dewey web service? What kind of questions will such an agent ask?
     Placement of the {locale} component: It is still an open question if the {locale} component should be placed after the {version} component instead (<http://dewey.info/concept/338.4/edn/22/en>) to emphasize that the most important instantiation of a Dewey class is its edition, not its language version. From a services point of view, however, it could make more sense to keep the current arrangement, because users are more likely to come to the service with a present understanding of the language version they are seeking without knowing the specifics of a certain edition in which they are trying to find topics.
     Identification of other Dewey entities: The goal is to create a locator that does not answer all, but a lot of questions that could be asked about the DDC. Which entities are missing but should be surfaced for services or user agents? How will those services or agents interact with them? Should some entities be rendered in a different way as presented? For example, (how) should the DDC Summaries be retrievable? Would it be necessary to make the DDC Manual accessible through this identifier structure?
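     A minimal sketch of the identifier pattern quoted above; the helper name and keyword arguments are illustrative, not part of the proposal:
     def dewey_concept_uri(notation, locale=None, edition=None, representation=None):
         """Assemble a dewey.info concept URI following the examples quoted above."""
         uri = f"http://dewey.info/concept/{notation}/"
         if locale:
             uri += f"{locale}/"
         if edition:
             uri += f"edn/{edition}/"
         if representation:
             uri = uri.rstrip("/") + f"/{representation}"
         return uri

     print(dewey_concept_uri("338.4", locale="en", edition="22"))
     # http://dewey.info/concept/338.4/en/edn/22/
     print(dewey_concept_uri("333.7-333.9", representation="about.skos"))
     # http://dewey.info/concept/333.7-333.9/about.skos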
  7. Baker, T.: A grammar of Dublin Core (2000) 0.05
    0.0492413 = product of:
      0.0984826 = sum of:
        0.0984826 = sum of:
          0.07042801 = weight(_text_:language in 1236) [ClassicSimilarity], result of:
            0.07042801 = score(doc=1236,freq=8.0), product of:
              0.2030952 = queryWeight, product of:
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.051766515 = queryNorm
              0.34677336 = fieldWeight in 1236, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.03125 = fieldNorm(doc=1236)
          0.028054593 = weight(_text_:22 in 1236) [ClassicSimilarity], result of:
            0.028054593 = score(doc=1236,freq=2.0), product of:
              0.18127751 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051766515 = queryNorm
              0.15476047 = fieldWeight in 1236, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1236)
      0.5 = coord(1/2)
    
    Abstract
    Dublin Core is often presented as a modern form of catalog card -- a set of elements (and now qualifiers) that describe resources in a complete package. Sometimes it is proposed as an exchange format for sharing records among multiple collections. The founding principle that "every element is optional and repeatable" reinforces the notion that a Dublin Core description is to be taken as a whole. This paper, in contrast, is based on a much different premise: Dublin Core is a language. More precisely, it is a small language for making a particular class of statements about resources. Like natural languages, it has a vocabulary of word-like terms, the two classes of which -- elements and qualifiers -- function within statements like nouns and adjectives; and it has a syntax for arranging elements and qualifiers into statements according to a simple pattern. Whenever tourists order a meal or ask directions in an unfamiliar language, considerate native speakers will spontaneously limit themselves to basic words and simple sentence patterns along the lines of "I am so-and-so" or "This is such-and-such". Linguists call this pidginization. In such situations, a small phrase book or translated menu can be most helpful. By analogy, today's Web has been called an Internet Commons where users and information providers from a wide range of scientific, commercial, and social domains present their information in a variety of incompatible data models and description languages. In this context, Dublin Core presents itself as a metadata pidgin for digital tourists who must find their way in this linguistically diverse landscape. Its vocabulary is small enough to learn quickly, and its basic pattern is easily grasped. It is well-suited to serve as an auxiliary language for digital libraries. This grammar starts by defining terms. It then follows a 200-year-old tradition of English grammar teaching by focusing on the structure of single statements. It concludes by looking at the growing dictionary of Dublin Core vocabulary terms -- its registry, and at how statements can be used to build the metadata equivalent of paragraphs and compositions -- the application profile.
    Date
    26.12.2011 14:01:22
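     Read this way, a Dublin Core description is a set of statements, each pairing an element (optionally refined by a qualifier) with a value. A small sketch, with values invented for illustration beyond what the record above gives:
     # Each statement: (element, qualifier, value); elements behave like nouns,
     # qualifiers like adjectives that refine them.
     statements = [
         ("Title",       None,       "A grammar of Dublin Core"),
         ("Creator",     None,       "Baker, Thomas"),
         ("Date",        "Issued",   "2000"),
         ("Description", "Abstract", "Dublin Core treated as a small language ..."),
     ]

     for element, qualifier, value in statements:
         label = f"{element}.{qualifier}" if qualifier else element
         print(f"{label} = {value!r}")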
  8. Stapleton, M.; Adams, M.: Faceted categorisation for the corporate desktop : visualisation and interaction using metadata to enhance user experience (2007) 0.05
    0.047451444 = product of:
      0.09490289 = sum of:
        0.09490289 = sum of:
          0.052821 = weight(_text_:language in 718) [ClassicSimilarity], result of:
            0.052821 = score(doc=718,freq=2.0), product of:
              0.2030952 = queryWeight, product of:
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.051766515 = queryNorm
              0.26008 = fieldWeight in 718, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.046875 = fieldNorm(doc=718)
          0.04208189 = weight(_text_:22 in 718) [ClassicSimilarity], result of:
            0.04208189 = score(doc=718,freq=2.0), product of:
              0.18127751 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051766515 = queryNorm
              0.23214069 = fieldWeight in 718, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=718)
      0.5 = coord(1/2)
    
    Abstract
    Mark Stapleton and Matt Adamson began their presentation by describing how Dow Jones' Factiva range of information services processed an average of 170,000 documents every day, drawn from over 10,000 sources in 22 languages. These documents are categorized within five facets: Company, Subject, Industry, Region and Language. The digital feeds received from information providers undergo a series of processing stages, initially to prepare them for automatic categorization and then to format them ready for distribution. The categorization stage is able to handle 98% of documents automatically, the remaining 2% requiring some form of human intervention. Depending on the source, categorization can involve any combination of 'Autocoding', 'Dictionary-based Categorizing', 'Rules-based Coding' or 'Manual Coding'
  9. Delsey, T.: The Making of RDA (2016) 0.05
    0.047451444 = product of:
      0.09490289 = sum of:
        0.09490289 = sum of:
          0.052821 = weight(_text_:language in 2946) [ClassicSimilarity], result of:
            0.052821 = score(doc=2946,freq=2.0), product of:
              0.2030952 = queryWeight, product of:
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.051766515 = queryNorm
              0.26008 = fieldWeight in 2946, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.046875 = fieldNorm(doc=2946)
          0.04208189 = weight(_text_:22 in 2946) [ClassicSimilarity], result of:
            0.04208189 = score(doc=2946,freq=2.0), product of:
              0.18127751 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051766515 = queryNorm
              0.23214069 = fieldWeight in 2946, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2946)
      0.5 = coord(1/2)
    
    Abstract
    The author revisits the development of RDA from its inception in 2005 through to its initial release in 2010. The development effort is set in the context of an evolving digital environment that was transforming both the production and dissemination of information resources and the technologies used to create, store, and access data describing those resources. The author examines the interplay between strategic commitments to align RDA with new conceptual models, emerging database structures, and metadata developments in allied communities, on the one hand, and compatibility with AACR2 legacy databases on the other. Aspects of the development effort examined include the structuring of RDA as a resource description language, organizing the new standard as a working tool, and refining guidelines and instructions for recording RDA data.
    Date
    17. 5.2016 19:22:40
  10. Caseiro, D.: Automatic language identification bibliography : Last Update: 20 September 1999 (1999) 0.04
    0.043575108 = product of:
      0.087150216 = sum of:
        0.087150216 = product of:
          0.17430043 = sum of:
            0.17430043 = weight(_text_:language in 1842) [ClassicSimilarity], result of:
              0.17430043 = score(doc=1842,freq=4.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.85822034 = fieldWeight in 1842, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1842)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This bibliography lists research in Automatic Identification of Spoken Language.
  11. Danowski, P.: Authority files and Web 2.0 : Wikipedia and the PND. An Example (2007) 0.04
    0.03954287 = product of:
      0.07908574 = sum of:
        0.07908574 = sum of:
          0.0440175 = weight(_text_:language in 1291) [ClassicSimilarity], result of:
            0.0440175 = score(doc=1291,freq=2.0), product of:
              0.2030952 = queryWeight, product of:
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.051766515 = queryNorm
              0.21673335 = fieldWeight in 1291, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1291)
          0.03506824 = weight(_text_:22 in 1291) [ClassicSimilarity], result of:
            0.03506824 = score(doc=1291,freq=2.0), product of:
              0.18127751 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051766515 = queryNorm
              0.19345059 = fieldWeight in 1291, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1291)
      0.5 = coord(1/2)
    
    Abstract
     In Web 2.0, more and more users index everything on their own: there are services for links, videos, pictures, books, encyclopaedic articles and scientific articles, and all of them operate independently of libraries. But must that really be the case? Can't libraries contribute their experience and tools to make user indexing better? Drawing on a project that connected the German-language Wikipedia with the German person authority file (Personennamendatei, PND) maintained at the German National Library (Deutsche Nationalbibliothek), I would like to show what is possible: how users can and will use authority files if we let them. We will look at how the project worked and what we can learn for future projects. Conclusions: authority files can have a role in Web 2.0; there must be an open interface or service for retrieval; everything that is indexed on the net with authority files can easily be integrated into a federated search; and, following O'Reilly, you have to find ways for your data to become more important the more it is used.
    Content
     Lecture given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  12. Neumann, M.: HAL: Hyperspace Analogue to Language (2012) 0.04
    0.037350092 = product of:
      0.074700184 = sum of:
        0.074700184 = product of:
          0.14940037 = sum of:
            0.14940037 = weight(_text_:language in 966) [ClassicSimilarity], result of:
              0.14940037 = score(doc=966,freq=4.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.7356174 = fieldWeight in 966, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.09375 = fieldNorm(doc=966)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Object
    Hyperspace Analogue to Language
  13. Wolfram Language erkennt Bilder (2015) 0.04
    0.037350092 = product of:
      0.074700184 = sum of:
        0.074700184 = product of:
          0.14940037 = sum of:
            0.14940037 = weight(_text_:language in 1872) [ClassicSimilarity], result of:
              0.14940037 = score(doc=1872,freq=16.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.7356174 = fieldWeight in 1872, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1872)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Wolfram Research has extended its cloud-based programming language Wolfram Language with an image-recognition function. The maker of the computer algebra system Mathematica and operator of the knowledge search engine Wolfram Alpha has taught its system to recognize images. With the ImageIdentify function, Wolfram Language now returns a symbolic description of an image's content, which can then be processed further within the language. The website The Wolfram Language Image Identification Project serves as a demo of this function: any image can be uploaded there and the result inspected. The website stores a thumbnail of the uploaded image so that a link to the results page can be shared. As is often the case with artificial intelligence, the results are sometimes amusingly off the mark, but often surprisingly good. The function is based on a neural network trained on some tens of millions of images and can identify roughly 10,000 objects.
    Content
     Cf. http://www.imageidentify.com. A more detailed explanation of how the function works and of its background can be found in Stephen Wolfram's blog post "Wolfram Language Artificial Intelligence: The Image Identification Project" at http://blog.stephenwolfram.com/2015/05/wolfram-language-artificial-intelligence-the-image-identification-project/. See also: https://news.ycombinator.com/item?id=8621658.
    Object
    Wolfram Language
    Source
    http://www.heise.de/newsticker/meldung/Wolfram-Language-erkennt-Bilder-2650207.html?wt_mc=rss.ho.beitrag.rdf
  14. Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; Sutskever, I.: Language models are unsupervised multitask learners 0.04
    0.037350092 = product of:
      0.074700184 = sum of:
        0.074700184 = product of:
          0.14940037 = sum of:
            0.14940037 = weight(_text_:language in 871) [ClassicSimilarity], result of:
              0.14940037 = score(doc=871,freq=16.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.7356174 = fieldWeight in 871, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=871)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset - matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples. The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text. These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.
    Source
    https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf
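     A hedged sketch of the zero-shot conditioning the abstract describes, using the small publicly released "gpt2" checkpoint from the Hugging Face transformers library rather than the paper's 1.5B-parameter model; the passage and question are invented for illustration:
     from transformers import pipeline

     generator = pipeline("text-generation", model="gpt2")

     # "Conditioned on a document plus questions": prepend the passage, then ask.
     prompt = (
         "Passage: OWL is a semantic markup language for publishing and "
         "sharing ontologies on the World Wide Web.\n"
         "Question: What is OWL used for?\n"
         "Answer:"
     )
     out = generator(prompt, max_new_tokens=20, do_sample=False)
     print(out[0]["generated_text"])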
  15. Hausser, R.: Language and nonlanguage cognition (2021) 0.03
    0.03493781 = product of:
      0.06987562 = sum of:
        0.06987562 = product of:
          0.13975124 = sum of:
            0.13975124 = weight(_text_:language in 255) [ClassicSimilarity], result of:
              0.13975124 = score(doc=255,freq=14.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.6881071 = fieldWeight in 255, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=255)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     A basic distinction in agent-based data-driven Database Semantics (DBS) is between language and nonlanguage cognition. Language cognition transfers content between agents by means of raw data. Nonlanguage cognition maps between content and raw data inside the focus agent. Recognition applies a concept type to raw data, resulting in a concept token. In language recognition, the focus agent (hearer) takes raw language-data (surfaces) produced by another agent (speaker) as input, while nonlanguage recognition takes raw nonlanguage-data as input. In either case, the output is a content which is stored in the agent's onboard short term memory. Action adapts a concept type to a purpose, resulting in a token. In language action, the focus agent (speaker) produces language-dependent surfaces for another agent (hearer), while nonlanguage action produces intentions for a nonlanguage purpose. In either case, the output is raw action data. As long as the procedural implementation of place holder values works properly, it is compatible with the DBS requirement of input-output equivalence between the natural prototype and the artificial reconstruction.
  16. Bechhofer, S.; Harmelen, F. van; Hendler, J.; Horrocks, I.; McGuinness, D.L.; Patel-Schneider, P.F.; Stein, L.A.: OWL Web Ontology Language Reference (2004) 0.03
    0.03444915 = product of:
      0.0688983 = sum of:
        0.0688983 = product of:
          0.1377966 = sum of:
            0.1377966 = weight(_text_:language in 4684) [ClassicSimilarity], result of:
              0.1377966 = score(doc=4684,freq=10.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.6784828 = fieldWeight in 4684, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4684)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Web Ontology Language OWL is a semantic markup language for publishing and sharing ontologies on the World Wide Web. OWL is developed as a vocabulary extension of RDF (the Resource Description Framework) and is derived from the DAML+OIL Web Ontology Language. This document contains a structured informal description of the full set of OWL language constructs and is meant to serve as a reference for OWL users who want to construct OWL ontologies.
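     As a rough sketch of the kind of construct the reference catalogues, an OWL class and object property are themselves just RDF triples; the class and property names below are invented examples (Python with rdflib):
     from rdflib import Graph, Literal, Namespace, RDF, RDFS, OWL

     EX = Namespace("http://example.org/ontology#")   # hypothetical namespace

     g = Graph()
     g.bind("owl", OWL)
     g.bind("ex", EX)

     # OWL is layered on RDF: classes and properties are described as resources.
     g.add((EX.Publication, RDF.type, OWL.Class))
     g.add((EX.Author, RDF.type, OWL.Class))
     g.add((EX.hasAuthor, RDF.type, OWL.ObjectProperty))
     g.add((EX.hasAuthor, RDFS.domain, EX.Publication))
     g.add((EX.hasAuthor, RDFS.range, EX.Author))
     g.add((EX.Publication, RDFS.label, Literal("Publication", lang="en")))

     print(g.serialize(format="turtle"))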
  17. Oard, D.W.: Alternative approaches for cross-language text retrieval (1997) 0.03
    0.03444915 = product of:
      0.0688983 = sum of:
        0.0688983 = product of:
          0.1377966 = sum of:
            0.1377966 = weight(_text_:language in 1164) [ClassicSimilarity], result of:
              0.1377966 = score(doc=1164,freq=40.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.6784828 = fieldWeight in 1164, product of:
                  6.3245554 = tf(freq=40.0), with freq of:
                    40.0 = termFreq=40.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1164)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The explosive growth of the Internet and other sources of networked information have made automatic mediation of access to networked information sources an increasingly important problem. Much of this information is expressed as electronic text, and it is becoming practical to automatically convert some printed documents and recorded speech to electronic text as well. Thus, automated systems capable of detecting useful documents are finding widespread application. With even a small number of languages it can be inconvenient to issue the same query repeatedly in every language, so users who are able to read more than one language will likely prefer a multilingual text retrieval system over a collection of monolingual systems. And since reading ability in a language does not always imply fluent writing ability in that language, such users will likely find cross-language text retrieval particularly useful for languages in which they are less confident of their ability to express their information needs effectively. The use of such systems can be also be beneficial if the user is able to read only a single language. For example, when only a small portion of the document collection will ever be examined by the user, performing retrieval before translation can be significantly more economical than performing translation before retrieval. So when the application is sufficiently important to justify the time and effort required for translation, those costs can be minimized if an effective cross-language text retrieval system is available. Even when translation is not available, there are circumstances in which cross-language text retrieval could be useful to a monolingual user. For example, a researcher might find a paper published in an unfamiliar language useful if that paper contains references to works by the same author that are in the researcher's native language.
     Multilingual text retrieval can be defined as selection of useful documents from collections that may contain several languages (English, French, Chinese, etc.). This formulation allows for the possibility that individual documents might contain more than one language, a common occurrence in some applications. Both cross-language and within-language retrieval are included in this formulation, but it is the cross-language aspect of the problem which distinguishes multilingual text retrieval from its well studied monolingual counterpart. At the SIGIR 96 workshop on "Cross-Linguistic Information Retrieval" the participants discussed the proliferation of terminology being used to describe the field and settled on "Cross-Language" as the best single description of the salient aspect of the problem. "Multilingual" was felt to be too broad, since that term has also been used to describe systems able to perform within-language retrieval in more than one language but that lack any cross-language capability. "Cross-lingual" and "cross-linguistic" were felt to be equally good descriptions of the field, but "cross-language" was selected as the preferred term in the interest of standardization. Unfortunately, at about the same time the U.S. Defense Advanced Research Projects Agency (DARPA) introduced "translingual" as their preferred term, so we are still some distance from reaching consensus on this matter.
     I will not attempt to draw a sharp distinction between retrieval and filtering in this survey. Although my own work on adaptive cross-language text filtering has led me to make this distinction fairly carefully in other presentations (cf. (Oard 1997b)), such an approach does little to help understand the fundamental techniques which have been applied or the results that have been obtained in this case. Since it is still common to view filtering (detection of useful documents in dynamic document streams) as a kind of retrieval, I will simply adopt that perspective here.
  18. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.03
    0.034257896 = product of:
      0.06851579 = sum of:
        0.06851579 = product of:
          0.20554736 = sum of:
            0.20554736 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.20554736 = score(doc=4388,freq=2.0), product of:
                0.43887708 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.051766515 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Footnote
     Cf. https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls.
  19. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.03
    0.034257896 = product of:
      0.06851579 = sum of:
        0.06851579 = product of:
          0.20554736 = sum of:
            0.20554736 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.20554736 = score(doc=5669,freq=2.0), product of:
                0.43887708 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.051766515 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  20. Landry, P.; Zumer, M.; Clavel-Merrin, G.: Report on cross-language subject access options (2006) 0.03
    0.03234613 = product of:
      0.06469226 = sum of:
        0.06469226 = product of:
          0.12938452 = sum of:
            0.12938452 = weight(_text_:language in 2433) [ClassicSimilarity], result of:
              0.12938452 = score(doc=2433,freq=12.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.6370634 = fieldWeight in 2433, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2433)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This report presents the results of desk-top based study of projects and initiatives in the area of linking and mapping subject tools. While its goal is to provide areas of further study for cross-language subject access in the European Library, and specifically the national libraries of the Ten New Member States, it is not restricted to cross-language mappings since some of the tools used to create links across thesauri or subject headings in the same language may also be appropriate for cross-language mapping. Tools reviewed have been selected to represent a variety of approaches (e.g. subject heading to subject heading, thesaurus to thesaurus, classification to subject heading) reflecting the variety of subject access tools in use in the European Library. The results show that there is no single solution that would be appropriate for all libraries but that parts of several initiatives may be applicable on a technical, organisational or content level.
    Source
    http://www.nuk.uni-lj.si/telmemor/docs/D3.4-Cross-language-access.pdf

Languages

  • e 205
  • d 94
  • a 3
  • el 3
  • es 1
  • i 1
  • nl 1

Types

  • a 138
  • i 10
  • n 10
  • m 6
  • p 5
  • r 5
  • x 5
  • b 4
  • s 3