Search (504 results, page 1 of 26)

  Active filters:
  • type_ss:"a"
  • type_ss:"el"
  • year_i:[2010 TO 2020}
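  The chips above correspond to Solr filter queries on the underlying index. As a minimal, hypothetical sketch (the endpoint URL, the collection name "literature", and the "title" display field are assumptions; the filter values, the half-open range syntax year_i:[2010 TO 2020}, and the query terms a, de and 22 that appear in the score explanations are taken from this page), such a request could be issued with SolrJ roughly as follows. Enabling debugQuery is the usual way to obtain the per-hit explain() trees printed under each result; their arithmetic is reconstructed after the result list.

      import org.apache.solr.client.solrj.SolrQuery;
      import org.apache.solr.client.solrj.impl.HttpSolrClient;
      import org.apache.solr.client.solrj.response.QueryResponse;

      public class FilteredSearch {
          public static void main(String[] args) throws Exception {
              // Hypothetical endpoint and collection name.
              try (HttpSolrClient solr =
                       new HttpSolrClient.Builder("http://localhost:8983/solr/literature").build()) {

                  SolrQuery q = new SolrQuery("a de 22");       // query terms as seen in the explain trees
                  // Whether several values of one facet are ORed or ANDed depends on the interface;
                  // OR is the common choice for a type facet.
                  q.addFilterQuery("type_ss:(\"a\" OR \"el\")");
                  q.addFilterQuery("year_i:[2010 TO 2020}");    // inclusive lower bound, exclusive upper bound
                  q.setStart(0);                                // page 1
                  q.setRows(20);                                // 20 hits per page, as in this listing
                  q.set("debugQuery", "true");                  // adds per-document score explanations

                  QueryResponse rsp = solr.query(q);
                  System.out.println(rsp.getResults().getNumFound() + " results");
                  rsp.getResults().forEach(doc ->
                      System.out.println(doc.getFieldValue("title")));  // assumed field name
                  // rsp.getExplainMap() holds the explain() text shown under each hit.
              }
          }
      }
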
  1. Junger, U.; Schwens, U.: ¬Die inhaltliche Erschließung des schriftlichen kulturellen Erbes auf dem Weg in die Zukunft : Automatische Vergabe von Schlagwörtern in der Deutschen Nationalbibliothek (2017) 0.05
    0.05334647 = product of:
      0.080019705 = sum of:
        0.0033183133 = weight(_text_:a in 3780) [ClassicSimilarity], result of:
          0.0033183133 = score(doc=3780,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.06369744 = fieldWeight in 3780, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3780)
        0.076701395 = sum of:
          0.046094913 = weight(_text_:de in 3780) [ClassicSimilarity], result of:
            0.046094913 = score(doc=3780,freq=2.0), product of:
              0.19416152 = queryWeight, product of:
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.045180224 = queryNorm
              0.23740499 = fieldWeight in 3780, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3780)
          0.030606484 = weight(_text_:22 in 3780) [ClassicSimilarity], result of:
            0.030606484 = score(doc=3780,freq=2.0), product of:
              0.15821345 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045180224 = queryNorm
              0.19345059 = fieldWeight in 3780, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3780)
      0.6666667 = coord(2/3)
    
    Date
    19. 8.2017 9:24:22
    Source
    http://www.dnb.de/SharedDocs/Downloads/DE/DNB/inhaltserschliessung/automatischeInhaltserschliessung.pdf?__blob=publicationFile
    Type
    a
  2. Mache, B.; Klaffki, L.: ¬Das DARIAH-DE Repository : Elementarer Teil einer modularen Infrastruktur für geistes- und kulturwissenschaftliche Forschungsdaten (2018) 0.05
    0.046119012 = product of:
      0.069178514 = sum of:
        0.0046456386 = weight(_text_:a in 4485) [ClassicSimilarity], result of:
          0.0046456386 = score(doc=4485,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.089176424 = fieldWeight in 4485, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4485)
        0.064532876 = product of:
          0.12906575 = sum of:
            0.12906575 = weight(_text_:de in 4485) [ClassicSimilarity], result of:
              0.12906575 = score(doc=4485,freq=8.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.66473395 = fieldWeight in 4485, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4485)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     DARIAH-DE supports humanities and cultural studies scholars who work with digital resources and methods in research and teaching. Research data and scholarly collections play a central role in this. Within a research data federation architecture, a range of tools is available with which data can be found, linked, published, and archived. These include the DARIAH-DE Repository, which offers researchers secure, long-term, and sustainable storage as well as publication of their research data.
    Object
    DARIAH-DE
    Type
    a
  3. Rocha, R.; Cobo, A.: Automatización de procesos de categorización jerárquica documental en las organizaciones (2010) 0.04
    0.038007386 = product of:
      0.05701108 = sum of:
        0.011379444 = weight(_text_:a in 4838) [ClassicSimilarity], result of:
          0.011379444 = score(doc=4838,freq=12.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.21843673 = fieldWeight in 4838, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4838)
        0.045631636 = product of:
          0.09126327 = sum of:
            0.09126327 = weight(_text_:de in 4838) [ClassicSimilarity], result of:
              0.09126327 = score(doc=4838,freq=4.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.47003788 = fieldWeight in 4838, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4838)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     In a global context characterized by the massive use of information technology and communications, any organization needs to optimize its search and document management processes. In this paper an analysis of modern document management techniques and computational strategies with specialized language resources is presented, and a model that can be used in automatic text categorization in the context of organizations is proposed. As a particular case we describe a classification system based on the JEL taxonomy (Journal of Economic Literature) that makes use of multilingual glossaries for hierarchical classification of scientific and technical documents related to the business functional areas.
    Type
    a
  4. Wolchover, N.: Wie ein Aufsehen erregender Beweis kaum Beachtung fand (2017) 0.03
    0.033280488 = product of:
      0.04992073 = sum of:
        0.0066366266 = weight(_text_:a in 3582) [ClassicSimilarity], result of:
          0.0066366266 = score(doc=3582,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.12739488 = fieldWeight in 3582, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=3582)
        0.043284103 = product of:
          0.08656821 = sum of:
            0.08656821 = weight(_text_:22 in 3582) [ClassicSimilarity], result of:
              0.08656821 = score(doc=3582,freq=4.0), product of:
                0.15821345 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045180224 = queryNorm
                0.54716086 = fieldWeight in 3582, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3582)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    22. 4.2017 10:42:05
    22. 4.2017 10:48:38
    Type
    a
  5. Trincardi, G.: Diese beiden Bilder sind identisch - und das Gehirn spielt dir einen Streich (2018) 0.03
    0.028123489 = product of:
      0.042185232 = sum of:
        0.0053093014 = weight(_text_:a in 3658) [ClassicSimilarity], result of:
          0.0053093014 = score(doc=3658,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.10191591 = fieldWeight in 3658, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=3658)
        0.03687593 = product of:
          0.07375186 = sum of:
            0.07375186 = weight(_text_:de in 3658) [ClassicSimilarity], result of:
              0.07375186 = score(doc=3658,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.37984797 = fieldWeight in 3658, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3658)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    https://motherboard.vice.com/de/article/zmwmy9/diese-beiden-bilder-sind-identisch-und-das-gehirn-spielt-dir-einen-streich
    Type
    a
  6. Guidi, F.; Sacerdoti Coen, C.: ¬A survey on retrieval of mathematical knowledge (2015) 0.03
    0.02806764 = product of:
      0.042101458 = sum of:
        0.011494976 = weight(_text_:a in 5865) [ClassicSimilarity], result of:
          0.011494976 = score(doc=5865,freq=6.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.22065444 = fieldWeight in 5865, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=5865)
        0.030606484 = product of:
          0.061212968 = sum of:
            0.061212968 = weight(_text_:22 in 5865) [ClassicSimilarity], result of:
              0.061212968 = score(doc=5865,freq=2.0), product of:
                0.15821345 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045180224 = queryNorm
                0.38690117 = fieldWeight in 5865, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5865)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    We present a short survey of the literature on indexing and retrieval of mathematical knowledge, with pointers to 72 papers and tentative taxonomies of both retrieval problems and recurring techniques.
    Date
    22. 2.2017 12:51:57
    Type
    a
  7. Sojka, P.; Liska, M.: ¬The art of mathematics retrieval (2011) 0.03
    0.02712456 = product of:
      0.04068684 = sum of:
        0.010387965 = weight(_text_:a in 3450) [ClassicSimilarity], result of:
          0.010387965 = score(doc=3450,freq=10.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.19940455 = fieldWeight in 3450, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3450)
        0.030298874 = product of:
          0.060597748 = sum of:
            0.060597748 = weight(_text_:22 in 3450) [ClassicSimilarity], result of:
              0.060597748 = score(doc=3450,freq=4.0), product of:
                0.15821345 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045180224 = queryNorm
                0.38301262 = fieldWeight in 3450, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3450)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     The design and architecture of MIaS (Math Indexer and Searcher), a system for mathematics retrieval, is presented, and design decisions are discussed. We argue for an approach based on Presentation MathML using a similarity of math subformulae. The system was implemented as a math-aware search engine based on the state-of-the-art system Apache Lucene. Scalability issues were checked against more than 400,000 arXiv documents with 158 million mathematical formulae. Almost three billion MathML subformulae were indexed using a Solr-compatible Lucene.
    Content
     Cf.: DocEng2011, September 19-22, 2011, Mountain View, California, USA. Copyright 2011 ACM 978-1-4503-0863-2/11/09.
    Date
    22. 2.2017 13:00:42
    Type
    a
  8. Ceynowa, K.: In Frankfurt lesen jetzt zuerst Maschinen : Deutsche Nationalbibliothek (2017) 0.02
    0.024608051 = product of:
      0.036912076 = sum of:
        0.0046456386 = weight(_text_:a in 3812) [ClassicSimilarity], result of:
          0.0046456386 = score(doc=3812,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.089176424 = fieldWeight in 3812, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3812)
        0.032266438 = product of:
          0.064532876 = sum of:
            0.064532876 = weight(_text_:de in 3812) [ClassicSimilarity], result of:
              0.064532876 = score(doc=3812,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.33236697 = fieldWeight in 3812, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3812)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
     See also: Henze, V., U. Junger u. E. Mödden: Grundzüge und erste Schritte der künftigen inhaltlichen Erschliessung von Publikationen in der Deutschen Nationalbibliothek, at: http://www.dnb.de/DE/Erwerbung/Inhaltserschliessung/grundzuegeInhaltserschliessungMai2017.html. See also: Wiesenmüller, H.: Das neue Sacherschließungskonzept der DNB in der FAZ, at: https://www.basiswissen-rda.de/neues-sacherschliessungskonzept-faz/. See also the discussion of this topic at: https://www.basiswissen-rda.de/neues-sacherschliessungskonzept-faz/.
    Type
    a
  9. Auer, S.: Towards an Open Research Knowledge Graph : vor einer Revolutionierung des wissenschaftlichen Arbeitens (2018) 0.02
    0.024608051 = product of:
      0.036912076 = sum of:
        0.0046456386 = weight(_text_:a in 4111) [ClassicSimilarity], result of:
          0.0046456386 = score(doc=4111,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.089176424 = fieldWeight in 4111, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4111)
        0.032266438 = product of:
          0.064532876 = sum of:
            0.064532876 = weight(_text_:de in 4111) [ClassicSimilarity], result of:
              0.064532876 = score(doc=4111,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.33236697 = fieldWeight in 4111, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4111)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Footnote
     See also a further TIB publication at: (https://www.tib.eu/de/service/aktuelles/detail/tib-veroeffentlicht-positionspapier-zu-open-research-knowledge-graph/).
    Type
    a
  10. Sales, R. de; Pires, T.B.: ¬The classification of Harris : influences of Bacon and Hegel in the universe of library classification (2017) 0.02
    0.023747265 = product of:
      0.035620898 = sum of:
        0.007963953 = weight(_text_:a in 3860) [ClassicSimilarity], result of:
          0.007963953 = score(doc=3860,freq=8.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.15287387 = fieldWeight in 3860, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3860)
        0.027656946 = product of:
          0.055313893 = sum of:
            0.055313893 = weight(_text_:de in 3860) [ClassicSimilarity], result of:
              0.055313893 = score(doc=3860,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.28488597 = fieldWeight in 3860, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3860)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     The studies of library classifications generally interact with a historical approach that contextualizes the research and with the ideas related to classification that are typical of Philosophy. In the 19th century, the North-American philosopher and educator William Torrey Harris developed a book classification at the St. Louis Public School, based on Francis Bacon and Georg Wilhelm Friedrich Hegel. The objective of the present study is to analyze Harris's classification, reflecting upon his theoretical and philosophical backgrounds in order to understand Harris's contribution to Knowledge Organization (KO). To achieve this objective, the study adopts a critical-descriptive approach for the analysis. The results show some influences of Bacon and Hegel in Harris's classification.
    Type
    a
  11. Assem, M. van: Converting and integrating vocabularies for the Semantic Web (2010) 0.02
    0.022692783 = product of:
      0.034039173 = sum of:
        0.007963953 = weight(_text_:a in 4639) [ClassicSimilarity], result of:
          0.007963953 = score(doc=4639,freq=18.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.15287387 = fieldWeight in 4639, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=4639)
        0.02607522 = product of:
          0.05215044 = sum of:
            0.05215044 = weight(_text_:de in 4639) [ClassicSimilarity], result of:
              0.05215044 = score(doc=4639,freq=4.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.26859307 = fieldWeight in 4639, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4639)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     This thesis focuses on conversion of vocabularies for representation and integration of collections on the Semantic Web. A secondary focus is how to represent metadata schemas (RDF Schemas representing metadata element sets) such that they interoperate with vocabularies. The primary domain in which we operate is that of cultural heritage collections. The background worldview in which a solution is sought is that of the Semantic Web research paradigm with its associated theories, methods, tools and use cases. In other words, we assume the Semantic Web is in principle able to provide the context to realize interoperable collections. Interoperability is dependent on the interplay between representations and the applications that use them. We mean applications in the widest sense, such as "search" and "annotation". These applications or tasks are often present in software applications, such as the E-Culture application. It is therefore necessary that applications' requirements on the vocabulary representation are met. This leads us to formulate the following problem statement: HOW CAN EXISTING VOCABULARIES BE MADE AVAILABLE TO SEMANTIC WEB APPLICATIONS?
     We refine the problem statement into three research questions. The first two focus on the problem of conversion of a vocabulary to a Semantic Web representation from its original format. Conversion of a vocabulary to a representation in a Semantic Web language is necessary to make the vocabulary available to Semantic Web applications. In the last question we focus on integration of collection metadata schemas in a way that allows for vocabulary representations as produced by our methods. Academic dissertation for the degree of Doctor at the Vrije Universiteit Amsterdam, Dutch Research School for Information and Knowledge Systems.
    Type
    a
  12. Bensman, S.J.: Eugene Garfield, Francis Narin, and PageRank : the theoretical bases of the Google search engine (2013) 0.02
    0.021329116 = product of:
      0.031993672 = sum of:
        0.0075084865 = weight(_text_:a in 1149) [ClassicSimilarity], result of:
          0.0075084865 = score(doc=1149,freq=4.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.14413087 = fieldWeight in 1149, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=1149)
        0.024485188 = product of:
          0.048970375 = sum of:
            0.048970375 = weight(_text_:22 in 1149) [ClassicSimilarity], result of:
              0.048970375 = score(doc=1149,freq=2.0), product of:
                0.15821345 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045180224 = queryNorm
                0.30952093 = fieldWeight in 1149, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1149)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This paper presents a test of the validity of using Google Scholar to evaluate the publications of researchers by comparing the premises on which its search engine, PageRank, is based, to those of Garfield's theory of citation indexing. It finds that the premises are identical and that PageRank and Garfield's theory of citation indexing validate each other.
    Date
    17.12.2013 11:02:22
    Type
    a
  13. Landwehr, A.: China schafft digitales Punktesystem für den "besseren" Menschen (2018) 0.02
    0.021329116 = product of:
      0.031993672 = sum of:
        0.0075084865 = weight(_text_:a in 4314) [ClassicSimilarity], result of:
          0.0075084865 = score(doc=4314,freq=4.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.14413087 = fieldWeight in 4314, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=4314)
        0.024485188 = product of:
          0.048970375 = sum of:
            0.048970375 = weight(_text_:22 in 4314) [ClassicSimilarity], result of:
              0.048970375 = score(doc=4314,freq=2.0), product of:
                0.15821345 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045180224 = queryNorm
                0.30952093 = fieldWeight in 4314, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4314)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    22. 6.2018 14:29:46
    Type
    a
  14. Fuchs, C.; Pampel, H.; Vierkant, P.: ORCID in Deutschland : Ergebnisse einer Bestandsaufnahme im Jahr 2016 (2017) 0.02
    0.021092616 = product of:
      0.031638924 = sum of:
        0.0039819763 = weight(_text_:a in 3857) [ClassicSimilarity], result of:
          0.0039819763 = score(doc=3857,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.07643694 = fieldWeight in 3857, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3857)
        0.027656946 = product of:
          0.055313893 = sum of:
            0.055313893 = weight(_text_:de in 3857) [ClassicSimilarity], result of:
              0.055313893 = score(doc=3857,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.28488597 = fieldWeight in 3857, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3857)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     The Open Researcher and Contributor ID, ORCID for short, is a unique identifier for researchers that links scholars to their articles, research data, and other products of scholarly work. The three-year DFG project "ORCID DE - Förderung der Open Researcher and Contributor ID in Deutschland" supports the implementation of ORCID, which is under consideration at many institutions, at universities and non-university research institutions. This article provides a comprehensive overview of the results of a survey conducted within the project on the state of ORCID implementation at academic institutions in Germany. The survey ran from 13.07.2016 to 03.08.2016 and yields numerous insights both into the state of ORCID implementation at academic institutions in Germany and into existing technical, legal, and organizational hurdles to implementing the service.
    Type
    a
  15. Strube, S.: Eine Woche billige Klickarbeit machte mich zum digitalen Lumpenproletarier (2014) 0.02
    0.021092616 = product of:
      0.031638924 = sum of:
        0.0039819763 = weight(_text_:a in 4164) [ClassicSimilarity], result of:
          0.0039819763 = score(doc=4164,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.07643694 = fieldWeight in 4164, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=4164)
        0.027656946 = product of:
          0.055313893 = sum of:
            0.055313893 = weight(_text_:de in 4164) [ClassicSimilarity], result of:
              0.055313893 = score(doc=4164,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.28488597 = fieldWeight in 4164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4164)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    https://motherboard.vice.com/de/article/aekpag/Billige-Clickarbeit-und-das-digitale-Lumpenproletariat-919
    Type
    a
  16. Graff, B.: "Spinne ich, wenn ich denke, dass sie ausschließlich meine Arbeit genutzt haben?" (2019) 0.02
    0.021092616 = product of:
      0.031638924 = sum of:
        0.0039819763 = weight(_text_:a in 4667) [ClassicSimilarity], result of:
          0.0039819763 = score(doc=4667,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.07643694 = fieldWeight in 4667, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=4667)
        0.027656946 = product of:
          0.055313893 = sum of:
            0.055313893 = weight(_text_:de in 4667) [ClassicSimilarity], result of:
              0.055313893 = score(doc=4667,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.28488597 = fieldWeight in 4667, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4667)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     A 19-year-old programs an algorithm that paints pictures. An artist collective uses it and sells the result for more than 400,000 dollars. The crucial question: who is the author? On 25 October, the 19-year-old Robbie Barrat was more active on his Twitter account than usual. Barrat, who calls himself "@DrBeef_" online, flooded his timeline with posts, at least by his standards. That day, the auction house Christie's in New York sold a portrait that is plainly attributed to the Paris artist collective "Obvious", although, confusingly, it is executed in a style shared by countless portraits that are all just as plainly attributed to Robbie Barrat. The picture sold in New York shows, at first glance, a gentleman of the early 20th century, one "Edmond de Belamy". The picture bearing that title, which conveys something of the dynamic fervor and pioneering spirit of the then still young avant-garde, has a sketch-like quality, something washed-out and tentative, even if a certain naturalism can be felt. Put it this way: as a work of art it is no revelation, and it seems rather disjointed in its unclear proportions. But it is not a complete failure either.
    Type
    a
  17. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.02
    0.021067886 = product of:
      0.031601828 = sum of:
        0.0056313644 = weight(_text_:a in 1967) [ClassicSimilarity], result of:
          0.0056313644 = score(doc=1967,freq=4.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.10809815 = fieldWeight in 1967, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=1967)
        0.025970465 = product of:
          0.05194093 = sum of:
            0.05194093 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
              0.05194093 = score(doc=1967,freq=4.0), product of:
                0.15821345 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045180224 = queryNorm
                0.32829654 = fieldWeight in 1967, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1967)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and /or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
    Type
    a
  18. Momeni, F.; Mayr, P.: Analyzing the research output presented at European Networked Knowledge Organization Systems workshops (2000-2015) (2016) 0.02
    0.020783756 = product of:
      0.031175632 = sum of:
        0.008128175 = weight(_text_:a in 3106) [ClassicSimilarity], result of:
          0.008128175 = score(doc=3106,freq=12.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.15602624 = fieldWeight in 3106, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3106)
        0.023047457 = product of:
          0.046094913 = sum of:
            0.046094913 = weight(_text_:de in 3106) [ClassicSimilarity], result of:
              0.046094913 = score(doc=3106,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.23740499 = fieldWeight in 3106, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3106)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     In this paper we analyze a major part of the research output of the Networked Knowledge Organization Systems (NKOS) community in the period 2000 to 2015 from a network analytical perspective. We focus on the paper output presented at the European NKOS workshops in the last 15 years. Our open dataset, the "NKOS bibliography", includes 14 workshop agendas (ECDL 2000-2010, TPDL 2011-2015) and 4 special issues on NKOS (2001, 2004, 2006 and 2015) which cover 171 papers with 218 distinct authors in total. A focus of the analysis is the visualization of co-authorship networks in this interdisciplinary field. We used standard network analytic measures like degree and betweenness centrality to describe the co-authorship distribution in our NKOS dataset. We can see in our dataset that 15% (with degree=0) of authors had no co-authorship with others and 53% of them had a maximum of 3 cooperations with other authors. 32% had at least 4 co-authors for all of their papers. The NKOS co-author network in the "NKOS bibliography" is a typical co-authorship network with one relatively large component, many smaller components and many isolated co-authorships or triples.
    Type
    a
  19. Araújo, P.C. de.; Tennis, J.; Guimarães, J.A.: Metatheory and knowledge organization (2017) 0.02
    0.020311622 = product of:
      0.030467432 = sum of:
        0.0074199745 = weight(_text_:a in 3858) [ClassicSimilarity], result of:
          0.0074199745 = score(doc=3858,freq=10.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.14243183 = fieldWeight in 3858, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3858)
        0.023047457 = product of:
          0.046094913 = sum of:
            0.046094913 = weight(_text_:de in 3858) [ClassicSimilarity], result of:
              0.046094913 = score(doc=3858,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.23740499 = fieldWeight in 3858, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3858)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     Metatheory is meta-analytic work that comes from sociology, and its purpose is the analysis of theory. Metatheory is a common form of scholarship in knowledge organization (KO). This paper presents an analysis of five papers that are metatheoretical investigations in KO. The papers were published between 2008 and 2015 in the journal Knowledge Organization. The preliminary findings of this paper are that, although the authors do metatheoretical work, it is not made explicit by the majority of the authors. Of the four types of metatheoretical work, metatheorizing in order to better understand theory (Mu) is most popular. Further, the external/intellectual approach, which imports analytical lenses from other fields, was applied in four of the five papers. The use of metatheory as a method of analysis is closely related to these authors' concern about epistemological, theoretical and methodological issues in the KO domain. Metatheory, while not always explicitly acknowledged as a method, is a valuable tool to better understand the foundations, the development of research, and the influence from other domains on KO.
    Type
    a
  20. Scheven, E.: Geokoordinaten in Bibliotheksdaten : Grundlage für innovative Nachnutzung (2015) 0.02
    0.019862993 = product of:
      0.029794488 = sum of:
        0.0053093014 = weight(_text_:a in 308) [ClassicSimilarity], result of:
          0.0053093014 = score(doc=308,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.10191591 = fieldWeight in 308, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=308)
        0.024485188 = product of:
          0.048970375 = sum of:
            0.048970375 = weight(_text_:22 in 308) [ClassicSimilarity], result of:
              0.048970375 = score(doc=308,freq=2.0), product of:
                0.15821345 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045180224 = queryNorm
                0.30952093 = fieldWeight in 308, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=308)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    16.11.2015 18:22:47
    Type
    a
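
  A note on the indented trees printed under each hit: they are Lucene explain() output for the ClassicSimilarity (TF-IDF) ranking named in them. As a minimal reconstruction, assuming Lucene's standard ClassicSimilarity definitions (only the numbers below are taken from this page), each matching term t contributes

      \[
      \begin{aligned}
        \mathrm{tf}(t,d) &= \sqrt{\mathrm{freq}(t,d)}, \qquad
        \mathrm{idf}(t) = 1 + \ln\frac{\mathrm{maxDocs}}{\mathrm{docFreq}(t) + 1},\\
        \mathrm{weight}(t,d) &=
          \underbrace{\mathrm{idf}(t)\,\mathrm{queryNorm}}_{\text{queryWeight}} \cdot
          \underbrace{\mathrm{tf}(t,d)\,\mathrm{idf}(t)\,\mathrm{fieldNorm}(d)}_{\text{fieldWeight}},\\
        \mathrm{score}(q,d) &= \mathrm{coord}(q,d)\,\sum_{t} \mathrm{weight}(t,d).
      \end{aligned}
      \]

  Plugging in the term "de" from entry 1 (doc 3780): idf = 1 + ln(44218/1635) ≈ 4.2975, queryWeight ≈ 4.2975 × 0.045180 ≈ 0.19416, fieldWeight ≈ 1.41421 × 4.2975 × 0.0390625 ≈ 0.23740, hence weight ≈ 0.19416 × 0.23740 ≈ 0.04609. Summing the three matching term weights (0.00332 + 0.04609 + 0.03061 ≈ 0.08002) and multiplying by coord(2/3), the fraction of top-level query clauses that matched, reproduces the displayed score of 0.05335. queryNorm (0.04518) is a query-wide constant, while fieldNorm encodes field-length normalization, so matches in shorter fields weigh more.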

Languages

  • d 300
  • e 189
  • i 6
  • f 2
  • a 1
  • el 1
  • es 1
  • no 1