Search (173 results, page 1 of 9)

  • type_ss:"el"
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.24
    0.24013881 = product of:
      0.48027763 = sum of:
        0.12006941 = product of:
          0.3602082 = sum of:
            0.3602082 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.3602082 = score(doc=1826,freq=2.0), product of:
                0.38455155 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0453587 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
        0.3602082 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.3602082 = score(doc=1826,freq=2.0), product of:
            0.38455155 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0453587 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.5 = coord(2/4)
    
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
  2. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.19
    0.19211105 = product of:
      0.3842221 = sum of:
        0.09605552 = product of:
          0.28816655 = sum of:
            0.28816655 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.28816655 = score(doc=230,freq=2.0), product of:
                0.38455155 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0453587 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
        0.28816655 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.28816655 = score(doc=230,freq=2.0), product of:
            0.38455155 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0453587 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
      0.5 = coord(2/4)
    
    Source
    https%3A%2F%2Ftannerlectures.utah.edu%2F_documents%2Fa-to-z%2Fp%2Fpopper80.pdf&usg=AOvVaw3f4QRTEH-OEBmoYr2J_c7H
  3. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.12
    0.12006941 = product of:
      0.24013881 = sum of:
        0.060034703 = product of:
          0.1801041 = sum of:
            0.1801041 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.1801041 = score(doc=4388,freq=2.0), product of:
                0.38455155 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0453587 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
        0.1801041 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.1801041 = score(doc=4388,freq=2.0), product of:
            0.38455155 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0453587 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
      0.5 = coord(2/4)
    
    Footnote
    Cf.: https://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=2ahUKEwizweHljdbcAhVS16QKHXcFD9QQFjABegQICRAB&url=https%3A%2F%2Fwww.researchgate.net%2Fpublication%2F271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls&usg=AOvVaw06orrdJmFF2xbCCp_hL26q.
  4. Delsey, T.: ¬The Making of RDA (2016) 0.05
    0.046405412 = product of:
      0.18562165 = sum of:
        0.18562165 = sum of:
          0.14874879 = weight(_text_:instructions in 2946) [ClassicSimilarity], result of:
            0.14874879 = score(doc=2946,freq=2.0), product of:
              0.31902805 = queryWeight, product of:
                7.033448 = idf(docFreq=105, maxDocs=44218)
                0.0453587 = queryNorm
              0.46625614 = fieldWeight in 2946, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.033448 = idf(docFreq=105, maxDocs=44218)
                0.046875 = fieldNorm(doc=2946)
          0.036872864 = weight(_text_:22 in 2946) [ClassicSimilarity], result of:
            0.036872864 = score(doc=2946,freq=2.0), product of:
              0.15883844 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0453587 = queryNorm
              0.23214069 = fieldWeight in 2946, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2946)
      0.25 = coord(1/4)
    
    Abstract
    The author revisits the development of RDA from its inception in 2005 through to its initial release in 2010. The development effort is set in the context of an evolving digital environment that was transforming both the production and dissemination of information resources and the technologies used to create, store, and access data describing those resources. The author examines the interplay between strategic commitments to align RDA with new conceptual models, emerging database structures, and metadata developments in allied communities, on the one hand, and compatibility with AACR2 legacy databases on the other. Aspects of the development effort examined include the structuring of RDA as a resource description language, organizing the new standard as a working tool, and refining guidelines and instructions for recording RDA data.
    Date
    17. 5.2016 19:22:40
  5. Kempf, A.O.; Ritze, D.; Eckert, K.; Zapilko, B.: New ways of mapping knowledge organization systems : using a semi-automatic matching procedure for building up vocabulary crosswalks (2013) 0.03
    0.034303546 = product of:
      0.13721418 = sum of:
        0.13721418 = weight(_text_:assisted in 989) [ClassicSimilarity], result of:
          0.13721418 = score(doc=989,freq=2.0), product of:
            0.30640912 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.0453587 = queryNorm
            0.44781366 = fieldWeight in 989, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.046875 = fieldNorm(doc=989)
      0.25 = coord(1/4)
    
    Abstract
    Crosswalks between different vocabularies are an indispensable prerequisite for integrated and high quality search scenarios in distributed data environments. Offered through the web and linked with each other they act as a central link so that users can move back and forth between different data sources available online. In the past, crosswalks between different thesauri have primarily been developed manually. In the long run the intellectual updating of such crosswalks requires huge personnel expenses. Therefore, an integration of automatic matching procedures, as for example Ontology Matching Tools, seems an obvious need. On the basis of computer generated correspondences between the Thesaurus for Economics (STW) and the Thesaurus for the Social Sciences (TheSoz) our contribution will explore cross-border approaches between IT-assisted tools and procedures on the one hand and external quality measurements via domain experts on the other hand. The techniques that emerge enable semi-automatically performed vocabulary crosswalks.
  6. Combs, A.; Krippner, S.: Collective consciousness and the social brain (2008) 0.03
    0.034303546 = product of:
      0.13721418 = sum of:
        0.13721418 = weight(_text_:assisted in 5622) [ClassicSimilarity], result of:
          0.13721418 = score(doc=5622,freq=2.0), product of:
            0.30640912 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.0453587 = queryNorm
            0.44781366 = fieldWeight in 5622, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.046875 = fieldNorm(doc=5622)
      0.25 = coord(1/4)
    
    Abstract
    This paper discusses supportive neurological and social evidence for 'collective consciousness', here understood as a shared sense of being together with others in a single or unified experience. Mirror neurons in the premotor and posterior parietal cortices respond to the intentions as well as the actions of other individuals. There are also mirror neurons in the anterior insula and anterior cingulate cortices which have been implicated in empathy. Many authors have considered the likely role of such mirror systems in the development of uniquely human aspects of sociality including language. Though not without criticism, Menant has made the case that mirror-neuron assisted exchanges aided the original advent of self-consciousness and intersubjectivity. Combining these ideas with social mirror theory it is not difficult to imagine the creation of similar dynamical patterns in the emotional and even cognitive neuronal activity of individuals in human groups, creating a feeling in which the participating members experience a unified sense of consciousness. Such instances pose a kind of 'binding problem' in which participating individuals exhibit a degree of 'entanglement'.
  7. Mitchell, J.S.; Panzer, M.: Dewey linked data : Making connections with old friends and new acquaintances (2012) 0.03
    0.02858629 = product of:
      0.11434516 = sum of:
        0.11434516 = weight(_text_:assisted in 305) [ClassicSimilarity], result of:
          0.11434516 = score(doc=305,freq=2.0), product of:
            0.30640912 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.0453587 = queryNorm
            0.37317806 = fieldWeight in 305, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.0390625 = fieldNorm(doc=305)
      0.25 = coord(1/4)
    
    Abstract
    This paper explores the history, use cases, and future plans associated with the availability of the Dewey Decimal Classification (DDC) system as linked data. Parts of the Dewey Decimal Classification (DDC) system have been available as linked data since 2009. Initial efforts included the DDC Summaries (the top three levels of the DDC) in eleven languages exposed as linked data in dewey.info. In 2010, the content of dewey.info was further extended by the addition of assignable numbers and captions from the Abridged Edition 14 data files in English, Italian, and Vietnamese. During 2012, we will add assignable numbers and captions from the latest full edition database, DDC 23. In addition to the "old friends" of different Dewey language versions, institutions such as the British Library and Deutsche Nationalbibliothek have made use of Dewey linked data in bibliographic records and authority files, and AGROVOC has linked to our data at a general level. We expect to extend our linked data network shortly to "new acquaintances" such as GeoNames, ISO 639-3 language codes, and Mathematics Subject Classification. In particular, we will examine the linking process to GeoNames as an example of cross-domain vocabulary alignment. In addition to linking plans, we report on use cases that facilitate machine-assisted categorization and support discovery in the Semantic Web environment.
  8. ¬The Computer Science Ontology (CSO) (2018) 0.03
    0.02858629 = product of:
      0.11434516 = sum of:
        0.11434516 = weight(_text_:assisted in 4429) [ClassicSimilarity], result of:
          0.11434516 = score(doc=4429,freq=2.0), product of:
            0.30640912 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.0453587 = queryNorm
            0.37317806 = fieldWeight in 4429, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4429)
      0.25 = coord(1/4)
    
    Abstract
    The Computer Science Ontology (CSO) is a large-scale ontology of research areas that was automatically generated using the Klink-2 algorithm on the Rexplore dataset, which consists of about 16 million publications, mainly in the field of Computer Science. The Klink-2 algorithm combines semantic technologies, machine learning, and knowledge from external sources to automatically generate a fully populated ontology of research areas. Some relationships were also revised manually by experts during the preparation of two ontology-assisted surveys in the fields of Semantic Web and Software Architecture. The main root of CSO is Computer Science; however, the ontology also includes a few secondary roots, such as Linguistics, Geometry, Semantics, and so on. CSO presents two main advantages over manually crafted categorisations used in Computer Science (e.g., 2012 ACM Classification, Microsoft Academic Search Classification). First, it can characterise higher-level research areas by means of hundreds of sub-topics and related terms, which makes it possible to map very specific terms to higher-level research areas. Secondly, it can be easily updated by running Klink-2 on a set of new publications. A more comprehensive discussion of the advantages of adopting an automatically generated ontology in the scholarly domain can be found in.
  9. Korthof, G.: Information Content, Compressibility and Meaning : Published: 18 June 2000. Updated 31 May 2006. Postscript 20 Oct 2009. (2000) 0.03
    0.026295321 = product of:
      0.105181284 = sum of:
        0.105181284 = product of:
          0.21036257 = sum of:
            0.21036257 = weight(_text_:instructions in 4245) [ClassicSimilarity], result of:
              0.21036257 = score(doc=4245,freq=4.0), product of:
                0.31902805 = queryWeight, product of:
                  7.033448 = idf(docFreq=105, maxDocs=44218)
                  0.0453587 = queryNorm
                0.6593858 = fieldWeight in 4245, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.033448 = idf(docFreq=105, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4245)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    In New Scientist, issue of 18 Sept 1999, "Life force", pp. 27-30, Paul Davies writes "an apparently random sequence such as 110101001010010111... cannot be condensed into a simple set of instructions, so it has a high information content." (p. 29). This notion of 'information content' leads to paradoxes. Consider random number generator software. Let it generate 100 and then 1000 random numbers. According to the above definition, the second sequence of numbers has an information content ten times higher than the first, because its description would be ten times longer. However, they are both generated by the same simple set of instructions, so they should have exactly the same 'information content'. There is the paradox. It seems clear that this measure of 'information content' misses the point. It measures the compressibility of a sequence, not its 'information content'. One needs the meaning of a sequence to capture its information content.
  10. Web search service features (2002) 0.02
    0.024791464 = product of:
      0.09916586 = sum of:
        0.09916586 = product of:
          0.19833171 = sum of:
            0.19833171 = weight(_text_:instructions in 923) [ClassicSimilarity], result of:
              0.19833171 = score(doc=923,freq=2.0), product of:
                0.31902805 = queryWeight, product of:
                  7.033448 = idf(docFreq=105, maxDocs=44218)
                  0.0453587 = queryNorm
                0.62167484 = fieldWeight in 923, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.033448 = idf(docFreq=105, maxDocs=44218)
                  0.0625 = fieldNorm(doc=923)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The table lists some of the features and techniques of the most common general Web search services, both to show how to use them and to help decide which may be the most appropriate. See the notes below that explain the headings. Each service also provides more detailed instructions. Note that some features will be available under an 'advanced', 'power' or other further search option and not from the main page.
  11. Resource Description & Access (RDA) (n.d.) 0.02
    0.024791464 = product of:
      0.09916586 = sum of:
        0.09916586 = product of:
          0.19833171 = sum of:
            0.19833171 = weight(_text_:instructions in 2438) [ClassicSimilarity], result of:
              0.19833171 = score(doc=2438,freq=2.0), product of:
                0.31902805 = queryWeight, product of:
                  7.033448 = idf(docFreq=105, maxDocs=44218)
                  0.0453587 = queryNorm
                0.62167484 = fieldWeight in 2438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.033448 = idf(docFreq=105, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2438)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    RDA Blog, or Resource Description & Access Blog, is a blog on Resource Description and Access (RDA), a new library cataloging standard that provides instructions and guidelines on formulating data for resource description and discovery. The standard is organized based on the Functional Requirements for Bibliographic Records (FRBR) and is intended for use by libraries and other cultural organizations as a replacement for the Anglo-American Cataloguing Rules (AACR2). Free for everyone, forever.
  12. Liu, S.: Decomposing DDC synthesized numbers (1996) 0.02
    0.015494664 = product of:
      0.061978657 = sum of:
        0.061978657 = product of:
          0.12395731 = sum of:
            0.12395731 = weight(_text_:instructions in 5969) [ClassicSimilarity], result of:
              0.12395731 = score(doc=5969,freq=2.0), product of:
                0.31902805 = queryWeight, product of:
                  7.033448 = idf(docFreq=105, maxDocs=44218)
                  0.0453587 = queryNorm
                0.38854676 = fieldWeight in 5969, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.033448 = idf(docFreq=105, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5969)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Much literature has been written speculating upon how classification can be used in online catalogs to improve information retrieval. While some empirical studies have been done exploring whether the direct use of traditional classification schemes designed for a manual environment is effective and efficient in the online environment, none has manipulated these manual classifications in such a way as to take full advantage of the power of both the classification and computer. It has been suggested by some authors, such as Wajenberg and Drabenstott, that this power could be realized if the individual components of synthesized DDC numbers could be identified and indexed. This paper looks at the feasibility of automatically decomposing DDC synthesized numbers and the implications of such decomposition for information retrieval. Based on an analysis of the instructions for synthesizing numbers in the main class Arts (700) and all DDC Tables, 17 decomposition rules were defined, 13 covering the Add Notes and four the Standard Subdivisions. 1,701 DDC synthesized numbers were decomposed by a computer system called DND (Dewey Number Decomposer), developed by the author. From the 1,701 numbers, 600 were randomly selected for examination by three judges, each evaluating 200 numbers. The decomposition success rate was 100% and it was concluded that synthesized DDC numbers can be accurately decomposed automatically. The study has implications for information retrieval, expert systems for assigning DDC numbers, automatic indexing, switching language development, enhancing classifiers' work, teaching library school students, and providing quality control for DDC number assignments. These implications were explored using a prototype retrieval system.
  13. Galeffi, A.; Sardo, A.L.: Cataloguing, a necessary evil : critical aspects of RDA (2016) 0.02
    0.015494664 = product of:
      0.061978657 = sum of:
        0.061978657 = product of:
          0.12395731 = sum of:
            0.12395731 = weight(_text_:instructions in 2952) [ClassicSimilarity], result of:
              0.12395731 = score(doc=2952,freq=2.0), product of:
                0.31902805 = queryWeight, product of:
                  7.033448 = idf(docFreq=105, maxDocs=44218)
                  0.0453587 = queryNorm
                0.38854676 = fieldWeight in 2952, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.033448 = idf(docFreq=105, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2952)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The Toolkit designed by the RDA Steering Committee makes Resource Description and Access available on the web, together with other useful documents (workflows, mappings, etc.). Reading, learning and memorizing are interconnected, and a working tool should make these activities faster and easier to perform. Some issues arise, however, when verifying how easy the tool really is to use and learn. The practical and formal requirements for a cataloguing code include plain language, ease of memorisation, clarity of instructions, familiarity for users, predictability and reproducibility of solutions, and general usability. From a formal point of view, the RDA text does not appear to be conceived for uninterrupted reading, but only for the reading of a few paragraphs for temporary catalographic needs. From a content point of view, having a syndetic view of the description of a resource is rather difficult: catalographic details are scattered and their re-organization is not easy. The visualisation and logical organisation in the Toolkit could be improved: the table of contents occupies a sizable portion of the screen and resizing or hiding it is not easy; the indentation leaves little space for the text; inhomogeneous font styles (italic and bold) and poor contrast between background and text colours do not make for easy reading; simultaneous visualization of two or more parts of the text is not allowed; and the Toolkit's icons are less intuitive than expected. In the conclusion, some suggestions are offered on how to improve these aspects of the Toolkit and its usability.
  14. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.02
    0.015008676 = product of:
      0.060034703 = sum of:
        0.060034703 = product of:
          0.1801041 = sum of:
            0.1801041 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.1801041 = score(doc=5669,freq=2.0), product of:
                0.38455155 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0453587 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  15. Heery, R.; Carpenter, L.; Day, M.: Renardus project developments and the wider digital library context (2001) 0.01
    0.014293145 = product of:
      0.05717258 = sum of:
        0.05717258 = weight(_text_:assisted in 1219) [ClassicSimilarity], result of:
          0.05717258 = score(doc=1219,freq=2.0), product of:
            0.30640912 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.0453587 = queryNorm
            0.18658903 = fieldWeight in 1219, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1219)
      0.25 = coord(1/4)
    
    Abstract
    Funding from the UK Electronic Libraries (eLib) programme and the European Community's Fourth Framework programme assisted the initial emergence of information gateways (e.g., SOSIG, EEVL, OMNI in the UK, and EELS in Sweden). Other gateways have been developed by initiatives co-ordinated by national libraries (such as DutchESS in the Netherlands, and AVEL and EdNA in Australia) and by universities and research funding bodies (e.g., GEM in the US, the Finnish Virtual Library, and the German SSG-FI services). An account of the emergence of subject gateways since the mid-1990s by Dempsey gives an historical perspective -- informed by UK experience in particular -- and also considers the future development of subject gateways in relation to other services. When considering the development and future of gateways, it would be helpful to have a clear definition of the service offered by a so-called 'subject gateway'. Precise definitions of 'information gateways', 'subject gateways' and 'quality controlled subject gateways' have been debated elsewhere. Koch has reviewed definitions and suggested typologies that are useful, not least in showing the differences that exist between broadly similar services. Working definitions that we will use in this article are that a subject gateway provides a search service to high quality Web resources selected from a particular subject area, whereas information gateways have wider criteria for selection of resources, e.g., a national approach. Inevitably, in a rapidly changing international environment, different people perceive different emphases in attempts to label services; the significant issue is that users, developers and designers can recognise and benefit from commonalities in approach.
  16. Brown, T.B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; Agarwal, S.; Herbert-Voss, A.; Krueger, G.; Henighan, T.; Child, R.; Ramesh, A.; Ziegler, D.M.; Wu, J.; Winter, C.; Hesse, C.; Chen, M.; Sigler, E.; Litwin, M.; Gray, S.; Chess, B.; Clark, J.; Berner, C.; McCandlish, S.; Radford, A.; Sutskever, I.; Amodei, D.: Language models are few-shot learners (2020) 0.01
    0.012395732 = product of:
      0.04958293 = sum of:
        0.04958293 = product of:
          0.09916586 = sum of:
            0.09916586 = weight(_text_:instructions in 872) [ClassicSimilarity], result of:
              0.09916586 = score(doc=872,freq=2.0), product of:
                0.31902805 = queryWeight, product of:
                  7.033448 = idf(docFreq=105, maxDocs=44218)
                  0.0453587 = queryNorm
                0.31083742 = fieldWeight in 872, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.033448 = idf(docFreq=105, maxDocs=44218)
                  0.03125 = fieldNorm(doc=872)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.
  17. Information als Rohstoff für Innovation : Programm der Bundesregierung 1996-2000 (1996) 0.01
    0.0122909555 = product of:
      0.049163822 = sum of:
        0.049163822 = product of:
          0.098327644 = sum of:
            0.098327644 = weight(_text_:22 in 5449) [ClassicSimilarity], result of:
              0.098327644 = score(doc=5449,freq=2.0), product of:
                0.15883844 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0453587 = queryNorm
                0.61904186 = fieldWeight in 5449, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=5449)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 2.1997 19:26:34
  18. Ask me[@sk.me]: your global information guide : der Wegweiser durch die Informationswelten (1996) 0.01
    0.0122909555 = product of:
      0.049163822 = sum of:
        0.049163822 = product of:
          0.098327644 = sum of:
            0.098327644 = weight(_text_:22 in 5837) [ClassicSimilarity], result of:
              0.098327644 = score(doc=5837,freq=2.0), product of:
                0.15883844 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0453587 = queryNorm
                0.61904186 = fieldWeight in 5837, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=5837)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    30.11.1996 13:22:37
  19. Kosmos Weltatlas 2000 : Der Kompass für das 21. Jahrhundert. Inklusive Welt-Routenplaner (1999) 0.01
    0.0122909555 = product of:
      0.049163822 = sum of:
        0.049163822 = product of:
          0.098327644 = sum of:
            0.098327644 = weight(_text_:22 in 4085) [ClassicSimilarity], result of:
              0.098327644 = score(doc=4085,freq=2.0), product of:
                0.15883844 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0453587 = queryNorm
                0.61904186 = fieldWeight in 4085, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=4085)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    7.11.1999 18:22:39
  20. Mitchell, J.S.: DDC 22 : an introduction (2003) 0.01
    0.012023993 = product of:
      0.04809597 = sum of:
        0.04809597 = product of:
          0.09619194 = sum of:
            0.09619194 = weight(_text_:22 in 1936) [ClassicSimilarity], result of:
              0.09619194 = score(doc=1936,freq=10.0), product of:
                0.15883844 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0453587 = queryNorm
                0.6055961 = fieldWeight in 1936, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1936)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Dewey Decimal Classification and Relative Index, Edition 22 (DDC 22) will be issued simultaneously in print and web versions in July 2003. The new edition is the first full print update to the Dewey Decimal Classification system in seven years; it includes several significant updates and many new numbers and topics. DDC 22 also features some fundamental structural changes that have been introduced with the goals of promoting classifier efficiency and improving the DDC for use in a variety of applications in the web environment. Most importantly, the content of the new edition has been shaped by the needs and recommendations of Dewey users around the world. The worldwide user community has an important role in shaping the future of the DDC.
    Object
    DDC-22
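
The relevance figures attached to each result are Lucene "explain" trees from a TF-IDF scorer (the leaf matches are labelled ClassicSimilarity), and the nested lines spell out how each value is composed: queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, with tf(freq=2.0) = 1.4142135 (the square root of the term frequency) and coord(n/m) scaling a sum by the fraction of query clauses that matched. As a minimal sketch, assuming only those displayed relations and using illustrative variable names that are not part of the search service, the score of result no. 1 can be recomputed from the values quoted in its tree:

  # Sketch: recompute the score of result no. 1 (0.24013881) from its explain tree.
  import math

  idf = 8.478011          # idf(docFreq=24, maxDocs=44218)
  query_norm = 0.0453587  # queryNorm
  field_norm = 0.078125   # fieldNorm(doc=1826)
  freq = 2.0              # termFreq of each matching term ("3a", "2f")

  tf = math.sqrt(freq)                         # 1.4142135 = tf(freq=2.0)
  query_weight = idf * query_norm              # 0.38455155 = queryWeight
  field_weight = tf * idf * field_norm         # 0.93669677 = fieldWeight
  clause_score = query_weight * field_weight   # 0.3602082 per matching term clause

  # The "3a" clause is scaled by coord(1/3); the sum of both clauses is then
  # scaled by coord(2/4), since 2 of the 4 query clauses matched this document.
  score = (clause_score * (1 / 3) + clause_score) * (2 / 4)
  print(f"{score:.6f}")  # 0.240139 - agrees with the reported 0.24013881 up to float rounding

The same arithmetic reproduces the other entries; in result no. 20, for example, the term "22" occurs ten times, so tf = sqrt(10) = 3.1622777 and the fieldWeight becomes 0.6055961.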

Years

Languages

  • d 86
  • e 80
  • el 2
  • a 1
  • i 1
  • nl 1

Types

  • a 76
  • i 10
  • m 5
  • b 2
  • r 2
  • s 2
  • n 1
  • x 1