Search (793 results, page 1 of 40)

  • year_i:[2010 TO 2020}
  1. (2013 ff.) 0.18
    0.17855053 = product of:
      0.35710105 = sum of:
        0.35710105 = sum of:
          0.24963866 = weight(_text_:z in 2851) [ClassicSimilarity], result of:
            0.24963866 = score(doc=2851,freq=2.0), product of:
              0.26458436 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.049572576 = queryNorm
              0.94351256 = fieldWeight in 2851, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.125 = fieldNorm(doc=2851)
          0.10746239 = weight(_text_:22 in 2851) [ClassicSimilarity], result of:
            0.10746239 = score(doc=2851,freq=2.0), product of:
              0.17359471 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049572576 = queryNorm
              0.61904186 = fieldWeight in 2851, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.125 = fieldNorm(doc=2851)
      0.5 = coord(1/2)
    
    Date
    20. 1.2015 18:30:22
    Type
    z
  2. Lenzen, M.: Künstliche Intelligenz : was sie kann & was uns erwartet (2018) 0.07
    0.07195387 = product of:
      0.14390774 = sum of:
        0.14390774 = sum of:
          0.110325746 = weight(_text_:z in 4295) [ClassicSimilarity], result of:
            0.110325746 = score(doc=4295,freq=4.0), product of:
              0.26458436 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.049572576 = queryNorm
              0.41697758 = fieldWeight in 4295, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4295)
          0.033582 = weight(_text_:22 in 4295) [ClassicSimilarity], result of:
            0.033582 = score(doc=4295,freq=2.0), product of:
              0.17359471 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049572576 = queryNorm
              0.19345059 = fieldWeight in 4295, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4295)
      0.5 = coord(1/2)
    
    Classification
    Z 010
    Date
    18. 6.2018 19:22:02
    KAB
    Z 010
  3. Bao, Z.; Han, Z.: What drives users' participation in online social Q&A communities? : an empirical study based on social cognitive theory (2019) 0.07
    0.07195387 = product of:
      0.14390774 = sum of:
        0.14390774 = sum of:
          0.110325746 = weight(_text_:z in 5497) [ClassicSimilarity], result of:
            0.110325746 = score(doc=5497,freq=4.0), product of:
              0.26458436 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.049572576 = queryNorm
              0.41697758 = fieldWeight in 5497, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5497)
          0.033582 = weight(_text_:22 in 5497) [ClassicSimilarity], result of:
            0.033582 = score(doc=5497,freq=2.0), product of:
              0.17359471 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049572576 = queryNorm
              0.19345059 = fieldWeight in 5497, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5497)
      0.5 = coord(1/2)
    
    Date
    20. 1.2015 18:30:22
  4. Köbler, J.; Niederklapfer, T.: Kreuzkonkordanzen zwischen RVK-BK-MSC-PACS der Fachbereiche Mathematik und Physik (2010) 0.07
    0.066956446 = product of:
      0.13391289 = sum of:
        0.13391289 = sum of:
          0.093614504 = weight(_text_:z in 4408) [ClassicSimilarity], result of:
            0.093614504 = score(doc=4408,freq=2.0), product of:
              0.26458436 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.049572576 = queryNorm
              0.35381722 = fieldWeight in 4408, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.046875 = fieldNorm(doc=4408)
          0.040298395 = weight(_text_:22 in 4408) [ClassicSimilarity], result of:
            0.040298395 = score(doc=4408,freq=2.0), product of:
              0.17359471 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049572576 = queryNorm
              0.23214069 = fieldWeight in 4408, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4408)
      0.5 = coord(1/2)
    
    Abstract
    Our project aims to establish a cross-concordance between universal classifications such as the "Regensburger Verbundklassifikation (RVK)" and the "Basisklassifikation (BK)" and the subject classifications "Mathematics Subject Classification (MSC2010)" and "Physics and Astronomy Classification Scheme (PACS2010)" for the fields of mathematics and physics. Conclusion: "The classificatory agreement between the Regensburger Verbundklassifikation and the Physics and Astronomy Classification Scheme was quite good in some subject areas (e.g. nuclear physics), while other areas (e.g. polymer physics, mineralogy) showed very little overlap. In total we were able to create 890 simple links; multiple links were not counted, for technical reasons. The project as a whole was very extensive and could therefore not be treated exhaustively within the twenty project days. Further development nevertheless appears worthwhile, in particular towards collective access via a web form and towards automatic classification."
    Pages
    22 S
  5. Engerer, V.: Metapher und Wissenstransfers im informationsbezogenen Diskurs (2013) 0.07
    0.066956446 = product of:
      0.13391289 = sum of:
        0.13391289 = sum of:
          0.093614504 = weight(_text_:z in 659) [ClassicSimilarity], result of:
            0.093614504 = score(doc=659,freq=2.0), product of:
              0.26458436 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.049572576 = queryNorm
              0.35381722 = fieldWeight in 659, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.046875 = fieldNorm(doc=659)
          0.040298395 = weight(_text_:22 in 659) [ClassicSimilarity], result of:
            0.040298395 = score(doc=659,freq=2.0), product of:
              0.17359471 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049572576 = queryNorm
              0.23214069 = fieldWeight in 659, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=659)
      0.5 = coord(1/2)
    
    Abstract
    This contribution attempts to apply properties of Schön's generative metaphor to the "static" result, i.e. to the technical terminology that arises when one scientific field is carried over into another. Metaphor is treated here as a knowledge-transferring procedure that moves concepts from one discipline to another. The article also discusses metaphor as part of the professional jargon of information science and library practice. A brief survey of the Danish vocabulary of library metaphors shows, among other things, that an "ontological gradient of experience" from abstract to concrete is at work in this domain, since many concepts from library technology, human-computer interaction and information science are explained with the help of more concrete concepts from better-understood domains, for example the domain of food intake. At the same time, the share of "de-metaphorized", formerly metaphorical expressions appears to be high here as well, just as it is for "worn-down" expressions in general language. The analysis closes with an outlook on a research field that makes deliberate use of the conceptual productivity of the notion of metaphor for studying terminological relations between the sciences.
    Date
    22. 3.2013 14:06:49
  6. Huang, M.-H.; Huang, W.-T.; Chang, C.-C.; Chen, D. Z.; Lin, C.-P.: The greater scattering phenomenon beyond Bradford's law in patent citation (2014) 0.07
    0.066956446 = product of:
      0.13391289 = sum of:
        0.13391289 = sum of:
          0.093614504 = weight(_text_:z in 1352) [ClassicSimilarity], result of:
            0.093614504 = score(doc=1352,freq=2.0), product of:
              0.26458436 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.049572576 = queryNorm
              0.35381722 = fieldWeight in 1352, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.046875 = fieldNorm(doc=1352)
          0.040298395 = weight(_text_:22 in 1352) [ClassicSimilarity], result of:
            0.040298395 = score(doc=1352,freq=2.0), product of:
              0.17359471 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049572576 = queryNorm
              0.23214069 = fieldWeight in 1352, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1352)
      0.5 = coord(1/2)
    
    Date
    22. 8.2014 17:11:29
  7. Luca, E.W. de; Dahlberg, I.: ¬Die Multilingual Lexical Linked Data Cloud : eine mögliche Zugangsoptimierung? (2014) 0.07
    0.066956446 = product of:
      0.13391289 = sum of:
        0.13391289 = sum of:
          0.093614504 = weight(_text_:z in 1736) [ClassicSimilarity], result of:
            0.093614504 = score(doc=1736,freq=2.0), product of:
              0.26458436 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.049572576 = queryNorm
              0.35381722 = fieldWeight in 1736, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.046875 = fieldNorm(doc=1736)
          0.040298395 = weight(_text_:22 in 1736) [ClassicSimilarity], result of:
            0.040298395 = score(doc=1736,freq=2.0), product of:
              0.17359471 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049572576 = queryNorm
              0.23214069 = fieldWeight in 1736, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1736)
      0.5 = coord(1/2)
    
    Abstract
    A great deal of information is already available on the web or can be extracted from isolated structured data stores such as information systems and social networks. Data integration through post-processing or through search mechanisms (e.g. D2R) is therefore important for making information generally usable. Semantic technologies allow the use of defined connections (typed links) that record how resources relate to one another, which benefits any application that can reuse the knowledge contained in the data. To build a semantic map of the data, we need knowledge about the individual data and their relationships to other data. This contribution presents our work on using Lexical Linked Data (LLD) through a meta-model that contains all resources and also makes it possible to find them from different perspectives. In doing so we connect existing work on knowledge fields (based on the Information Coding Classification) with the Multilingual Lexical Linked Data Cloud (based on the RDF/OWL representation of EuroWordNet and the similarly integrated lexical resources MultiWordNet, MEMODATA and the Hamburg Metapher DB).
    Date
    22. 9.2014 19:00:13
  8. Open Knowledge Foundation: Prinzipien zu offenen bibliographischen Daten (2011) 0.06
    0.0627521 = product of:
      0.1255042 = sum of:
        0.1255042 = sum of:
          0.07801208 = weight(_text_:z in 4399) [ClassicSimilarity], result of:
            0.07801208 = score(doc=4399,freq=2.0), product of:
              0.26458436 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.049572576 = queryNorm
              0.29484767 = fieldWeight in 4399, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4399)
          0.047492117 = weight(_text_:22 in 4399) [ClassicSimilarity], result of:
            0.047492117 = score(doc=4399,freq=4.0), product of:
              0.17359471 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049572576 = queryNorm
              0.27358043 = fieldWeight in 4399, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4399)
      0.5 = coord(1/2)
    
    Content
    "Bibliographic data: To define the scope of the principles, this first part explains the underlying concept of bibliographic data. Core data: Bibliographic data consist of bibliographic descriptions. A bibliographic description describes a bibliographic resource (article, monograph, etc., whether printed or electronic) for the purpose of 1. identifying the described resource, i.e. pointing to a particular resource within the totality of all bibliographic resources, and 2. locating the described resource, i.e. indicating where it can be found. Traditionally a description served both purposes at once by providing information about: author(s) and editors, title, publisher, date and place of publication, identification of the parent work (e.g. a journal), and page numbers. On the web, identification takes place by means of Uniform Resource Identifiers (URIs), such as URNs or DOIs. Localization is made possible by HTTP URIs, which are also known as Uniform Resource Locators (URLs). All URIs for bibliographic resources therefore fall under the narrow concept of bibliographic data. Secondary data: A bibliographic description may contain further information that falls under the concept of bibliographic data, for example non-web identifiers (ISBN, LCCN, OCLC, etc.), statements about copyright status, administrative data and more; such data may be produced by libraries, publishers, scholars, online communities of book lovers, social reference-management systems and others. In addition, libraries and related institutions produce controlled vocabularies for the purpose of bibliographic description, e.g. name and subject authority files, classifications, etc., which likewise fall under the concept of bibliographic data."
    Date
    22. 3.2011 18:22:29
  9. Zhu, Q.; Kong, X.; Hong, S.; Li, J.; He, Z.: Global ontology research progress : a bibliometric analysis (2015) 0.06
    0.0627521 = product of:
      0.1255042 = sum of:
        0.1255042 = sum of:
          0.07801208 = weight(_text_:z in 2590) [ClassicSimilarity], result of:
            0.07801208 = score(doc=2590,freq=2.0), product of:
              0.26458436 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.049572576 = queryNorm
              0.29484767 = fieldWeight in 2590, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2590)
          0.047492117 = weight(_text_:22 in 2590) [ClassicSimilarity], result of:
            0.047492117 = score(doc=2590,freq=4.0), product of:
              0.17359471 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049572576 = queryNorm
              0.27358043 = fieldWeight in 2590, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2590)
      0.5 = coord(1/2)
    
    Date
    20. 1.2015 18:30:22
    17. 9.2018 18:22:23
  10. Castle, C.: Getting the central RDM message across : a case study of central versus discipline-specific Research Data Services (RDS) at the University of Cambridge (2019) 0.06
    0.060516007 = sum of:
      0.043725006 = product of:
        0.17490003 = sum of:
          0.17490003 = weight(_text_:author's in 5491) [ClassicSimilarity], result of:
            0.17490003 = score(doc=5491,freq=4.0), product of:
              0.3331353 = queryWeight, product of:
                6.7201533 = idf(docFreq=144, maxDocs=44218)
                0.049572576 = queryNorm
              0.52501196 = fieldWeight in 5491, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                6.7201533 = idf(docFreq=144, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5491)
        0.25 = coord(1/4)
      0.016791 = product of:
        0.033582 = sum of:
          0.033582 = weight(_text_:22 in 5491) [ClassicSimilarity], result of:
            0.033582 = score(doc=5491,freq=2.0), product of:
              0.17359471 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049572576 = queryNorm
              0.19345059 = fieldWeight in 5491, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5491)
        0.5 = coord(1/2)
    
    Abstract
    RDS are usually cross-disciplinary, centralised services, which are increasingly provided at a university by the academic library and in collaboration with other RDM stakeholders, such as the Research Office. At research-intensive universities, research data is generated in a wide range of disciplines and sub-disciplines. This paper will discuss how providing discipline-specific RDM support is approached by such universities and academic libraries, and the advantages and disadvantages of these central and discipline-specific approaches. A descriptive case study on the author's experiences of collaborating with a central RDS at the University of Cambridge, as a subject librarian embedded in an academic department, is a major component of this paper. The case study describes how centralised RDM services offered by the Office of Scholarly Communication (OSC) have been adapted to meet discipline-specific needs in the Department of Chemistry. It will introduce the department and the OSC, and describe the author's role in delivering RDM training, as well as the Data Champions programme, and their membership of the RDM Project Group. It will describe the outcomes of this collaboration for the Department of Chemistry, and for the centralised service. Centralised and discipline-specific approaches to RDS provision have their own advantages and disadvantages. Supporting the discipline-specific RDM needs of researchers is proving particularly challenging for universities to address sustainably: it requires adequate financial resources and staff skilled (or re-skilled) in RDM. A mixed approach is the most desirable, cost-effective way of providing RDS, but this still has constraints.
    Date
    7. 9.2019 21:30:22
  11. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.06
    0.05905079 = product of:
      0.11810158 = sum of:
        0.11810158 = product of:
          0.47240633 = sum of:
            0.47240633 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
              0.47240633 = score(doc=973,freq=2.0), product of:
                0.42027685 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.049572576 = queryNorm
                1.1240361 = fieldWeight in 973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.09375 = fieldNorm(doc=973)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Content
    Cf.: http://creativechoice.org/doc/HansJonas.pdf.
  12. Zhang, Y.: Developing a holistic model for digital library evaluation (2010) 0.06
    0.057251096 = sum of:
      0.0371019 = product of:
        0.1484076 = sum of:
          0.1484076 = weight(_text_:author's in 2360) [ClassicSimilarity], result of:
            0.1484076 = score(doc=2360,freq=2.0), product of:
              0.3331353 = queryWeight, product of:
                6.7201533 = idf(docFreq=144, maxDocs=44218)
                0.049572576 = queryNorm
              0.44548744 = fieldWeight in 2360, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.7201533 = idf(docFreq=144, maxDocs=44218)
                0.046875 = fieldNorm(doc=2360)
        0.25 = coord(1/4)
      0.020149197 = product of:
        0.040298395 = sum of:
          0.040298395 = weight(_text_:22 in 2360) [ClassicSimilarity], result of:
            0.040298395 = score(doc=2360,freq=2.0), product of:
              0.17359471 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049572576 = queryNorm
              0.23214069 = fieldWeight in 2360, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2360)
        0.5 = coord(1/2)
    
    Abstract
    This article reports the author's recent research in developing a holistic model for various levels of digital library (DL) evaluation in which perceived important criteria from heterogeneous stakeholder groups are organized and presented. To develop such a model, the author applied a three-stage research approach: exploration, confirmation, and verification. During the exploration stage, a literature review was conducted followed by an interview, along with a card sorting technique, to collect important criteria perceived by DL experts. Then the criteria identified were used for developing an online survey during the confirmation stage. Survey respondents (431 in total) from 22 countries rated the importance of the criteria. A holistic DL evaluation model was constructed using statistical techniques. Eventually, the verification stage was devised to test the reliability of the model in the context of searching and evaluating an operational DL. The proposed model fills two lacunae in the DL domain: (a) the lack of a comprehensive and flexible framework to guide and benchmark evaluations, and (b) the uncertainty about what divergence exists among heterogeneous DL stakeholders, including general users.
  13. Hjoerland, B.: ¬The importance of theories of knowledge : indexing and information retrieval as an example (2011) 0.06
    0.057251096 = sum of:
      0.0371019 = product of:
        0.1484076 = sum of:
          0.1484076 = weight(_text_:author's in 4359) [ClassicSimilarity], result of:
            0.1484076 = score(doc=4359,freq=2.0), product of:
              0.3331353 = queryWeight, product of:
                6.7201533 = idf(docFreq=144, maxDocs=44218)
                0.049572576 = queryNorm
              0.44548744 = fieldWeight in 4359, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.7201533 = idf(docFreq=144, maxDocs=44218)
                0.046875 = fieldNorm(doc=4359)
        0.25 = coord(1/4)
      0.020149197 = product of:
        0.040298395 = sum of:
          0.040298395 = weight(_text_:22 in 4359) [ClassicSimilarity], result of:
            0.040298395 = score(doc=4359,freq=2.0), product of:
              0.17359471 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049572576 = queryNorm
              0.23214069 = fieldWeight in 4359, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4359)
        0.5 = coord(1/2)
    
    Abstract
    A recent study in information science (IS) raises important issues concerning the value of human indexing and basic theories of indexing and information retrieval, as well as the use of quantitative and qualitative approaches in IS and the underlying theories of knowledge informing the field. The present article uses L&E as the point of departure for demonstrating in what way more social and interpretative understandings may provide fruitful improvements for research in indexing, knowledge organization, and information retrieval. The article is motivated by the observation that philosophical contributions tend to be ignored in IS if they are not directly framed as criticisms or invitations to dialogue. It is part of the author's ongoing publication of articles about philosophical issues in IS and is intended to be followed by analyses of other examples of contributions to core issues in IS. Although it is formulated as a criticism of a specific paper, it should be seen as part of a general discussion of the philosophical foundation of IS and as support for the emerging social paradigm in this field.
    Date
    17. 3.2011 19:22:55
  14. Taheri, S.M.; Shahrestani, Z.; Nezhad, M.H.Y.: Switching languages and the national content consortiums : an overview on the challenges of designing an Iranian model (2014) 0.06
    0.05579704 = product of:
      0.11159408 = sum of:
        0.11159408 = sum of:
          0.07801208 = weight(_text_:z in 1447) [ClassicSimilarity], result of:
            0.07801208 = score(doc=1447,freq=2.0), product of:
              0.26458436 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.049572576 = queryNorm
              0.29484767 = fieldWeight in 1447, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1447)
          0.033582 = weight(_text_:22 in 1447) [ClassicSimilarity], result of:
            0.033582 = score(doc=1447,freq=2.0), product of:
              0.17359471 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049572576 = queryNorm
              0.19345059 = fieldWeight in 1447, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1447)
      0.5 = coord(1/2)
    
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  15. Costas, R.; Zahedi, Z.; Wouters, P.: ¬The thematic orientation of publications mentioned on social media : large-scale disciplinary comparison of social media metrics with citations (2015) 0.06
    0.05579704 = product of:
      0.11159408 = sum of:
        0.11159408 = sum of:
          0.07801208 = weight(_text_:z in 2598) [ClassicSimilarity], result of:
            0.07801208 = score(doc=2598,freq=2.0), product of:
              0.26458436 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.049572576 = queryNorm
              0.29484767 = fieldWeight in 2598, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2598)
          0.033582 = weight(_text_:22 in 2598) [ClassicSimilarity], result of:
            0.033582 = score(doc=2598,freq=2.0), product of:
              0.17359471 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049572576 = queryNorm
              0.19345059 = fieldWeight in 2598, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2598)
      0.5 = coord(1/2)
    
    Date
    20. 1.2015 18:30:22
  16. Osinska, V.; Kowalska, M.; Osinski, Z.: ¬The role of visualization in the shaping and exploration of the individual information space : part 1 (2018) 0.06
    0.05579704 = product of:
      0.11159408 = sum of:
        0.11159408 = sum of:
          0.07801208 = weight(_text_:z in 4641) [ClassicSimilarity], result of:
            0.07801208 = score(doc=4641,freq=2.0), product of:
              0.26458436 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.049572576 = queryNorm
              0.29484767 = fieldWeight in 4641, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4641)
          0.033582 = weight(_text_:22 in 4641) [ClassicSimilarity], result of:
            0.033582 = score(doc=4641,freq=2.0), product of:
              0.17359471 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049572576 = queryNorm
              0.19345059 = fieldWeight in 4641, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4641)
      0.5 = coord(1/2)
    
    Date
    21.12.2018 17:22:13
  17. Jiang, Z.; Gu, Q.; Yin, Y.; Wang, J.; Chen, D.: GRAW+ : a two-view graph propagation method with word coupling for readability assessment (2019) 0.06
    0.05579704 = product of:
      0.11159408 = sum of:
        0.11159408 = sum of:
          0.07801208 = weight(_text_:z in 5218) [ClassicSimilarity], result of:
            0.07801208 = score(doc=5218,freq=2.0), product of:
              0.26458436 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.049572576 = queryNorm
              0.29484767 = fieldWeight in 5218, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5218)
          0.033582 = weight(_text_:22 in 5218) [ClassicSimilarity], result of:
            0.033582 = score(doc=5218,freq=2.0), product of:
              0.17359471 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049572576 = queryNorm
              0.19345059 = fieldWeight in 5218, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5218)
      0.5 = coord(1/2)
    
    Date
    15. 4.2019 13:46:22
  18. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.05
    0.049208995 = product of:
      0.09841799 = sum of:
        0.09841799 = product of:
          0.39367196 = sum of:
            0.39367196 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.39367196 = score(doc=1826,freq=2.0), product of:
                0.42027685 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.049572576 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  19. ORCID (2015) 0.05
    0.046807252 = product of:
      0.093614504 = sum of:
        0.093614504 = product of:
          0.18722901 = sum of:
            0.18722901 = weight(_text_:z in 1870) [ClassicSimilarity], result of:
              0.18722901 = score(doc=1870,freq=8.0), product of:
                0.26458436 = queryWeight, product of:
                  5.337313 = idf(docFreq=577, maxDocs=44218)
                  0.049572576 = queryNorm
                0.70763445 = fieldWeight in 1870, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.337313 = idf(docFreq=577, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1870)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    ORCID is intended to enable the electronic assignment of publications to their authors. This is necessary because different authors can share the same name, names change (e.g. upon marriage), and names are given differently in different publications (e.g. full first names in one place, only initials in another). ORCID aims to become the de facto standard for author identification in scholarly publishing. Its establishment is organized by the non-profit Open Researcher Contributor Identification Initiative. The founding members of the initiative include numerous scholarly publishing groups (e.g. Elsevier, Nature Publishing Group, Springer) and research organizations (e.g. EMBO, CERN). Planning for ORCID was based on surveys conducted in 2010. ORCID officially launched on 16 October 2012. By the end of 2012 ORCID had 42,918 registrants, by the end of 2013 there were 460,000, and by November 2014 ORCID had issued 1 million author identifiers. See also the connection with the GND and the DNB's cataloguing guidelines at: https://wiki.dnb.de/x/vYYGAw. (A short sketch of the iD's check-digit format follows after the result list.)
  20. Hochschule im digitalen Zeitalter : Informationskompetenz neu begreifen - Prozesse anders steuern (2012) 0.04
    0.04463763 = product of:
      0.08927526 = sum of:
        0.08927526 = sum of:
          0.062409665 = weight(_text_:z in 506) [ClassicSimilarity], result of:
            0.062409665 = score(doc=506,freq=2.0), product of:
              0.26458436 = queryWeight, product of:
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.049572576 = queryNorm
              0.23587814 = fieldWeight in 506, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.337313 = idf(docFreq=577, maxDocs=44218)
                0.03125 = fieldNorm(doc=506)
          0.026865598 = weight(_text_:22 in 506) [ClassicSimilarity], result of:
            0.026865598 = score(doc=506,freq=2.0), product of:
              0.17359471 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049572576 = queryNorm
              0.15476047 = fieldWeight in 506, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=506)
      0.5 = coord(1/2)
    
    Abstract
    To strengthen students' information literacy, the corresponding teaching offerings need to be expanded, anchored more firmly in the curricula than before, and provided as comprehensively as possible. The various offerings for teaching information literacy, provided by different actors, should be coordinated and interlinked more closely than has been the case so far. To secure the information literacy of all teaching staff, they should make greater use of suitable continuing-education and training offerings, and university leaderships should ensure that attractive offerings of this kind are available. The information literacy of researchers must also be developed, by taking up suitable qualification offerings and by anchoring qualification measures more firmly, for example in the curricula of graduate and postgraduate education. Researchers can also strengthen their information literacy within competence networks, which university leaderships should support accordingly. University leaderships should be able to change structures and processes as part of an internal university governance process; within the university leadership, one person must therefore be responsible for, and serve as the contact for, the topics of "information infrastructure" and "strengthening information literacy". As far as services are concerned, it is recommended in particular that the staff of university libraries and computing centres expand their competencies so that they can better support researchers in data management.
    Date
    8.12.2012 17:22:26
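
The score shown beside each hit, and the indented breakdown beneath it, is Lucene's ClassicSimilarity (TF-IDF) explain output for the two query terms "z" and "22". As a rough illustration of how those factors combine, the following minimal Python sketch replays the explain arithmetic for the first hit (doc 2851); it is a reconstruction of the displayed figures, not the search engine's own code.

```python
import math

# Factors copied from the explain output of hit 1 (doc 2851).
QUERY_NORM = 0.049572576

def term_score(freq: float, idf: float, field_norm: float) -> float:
    """TF-IDF contribution of one query term in one field (ClassicSimilarity)."""
    query_weight = idf * QUERY_NORM       # e.g. 5.337313 * queryNorm ~ 0.26458436
    tf = math.sqrt(freq)                  # 1.4142135 for freq = 2.0
    field_weight = tf * idf * field_norm  # ~0.94351256 for the "z" term
    return query_weight * field_weight

score_z  = term_score(freq=2.0, idf=5.337313, field_norm=0.125)   # ~0.24963866
score_22 = term_score(freq=2.0, idf=3.5018296, field_norm=0.125)  # ~0.10746239

# coord(1/2): only one of the two top-level query clauses matched,
# so the summed term scores are halved.
total = (score_z + score_22) * 0.5
print(round(total, 8))  # ~0.17855053, the value shown for hit 1
```

Each term contributes queryWeight (idf × queryNorm) times fieldWeight (sqrt(freq) × idf × fieldNorm); the same pattern, with different freq and fieldNorm values, accounts for the breakdowns of the other hits.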

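Hit 19 describes ORCID iDs, which disambiguate author names across publications. As background to that record (not something stated in it): an ORCID iD is a 16-character identifier whose final character is a check digit computed with the ISO 7064 MOD 11-2 scheme described in ORCID's public documentation. The sketch below is a generic illustration of validating that structure; the sample iD is the one commonly used in ORCID's documentation and is assumed here purely for illustration.

```python
def orcid_checksum_ok(orcid: str) -> bool:
    """Validate the check digit of an ORCID iD (ISO 7064 MOD 11-2)."""
    digits = orcid.replace("-", "").upper()
    if len(digits) != 16 or not digits[:-1].isdigit():
        return False
    total = 0
    for ch in digits[:-1]:
        total = (total + int(ch)) * 2
    expected = (12 - total % 11) % 11
    check = "X" if expected == 10 else str(expected)
    return digits[-1] == check

# Sample iD taken from ORCID's documentation (assumed example):
print(orcid_checksum_ok("0000-0002-1825-0097"))  # expected: True
```
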
Languages

  • e 566
  • d 218
  • a 1
  • hu 1

Types

  • a 689
  • el 65
  • m 52
  • s 16
  • x 15
  • r 12
  • b 5
  • i 1
  • z 1

Themes

Subjects

Classifications