Search (1063 results, page 2 of 54)

  • type_ss:"a"
  • year_i:[2010 TO 2020}
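The two facet filters above are ordinary Solr filter queries; the half-open range `[2010 TO 2020}` includes 2010 but excludes 2020. A minimal sketch of how such a request might be assembled (the host and core name are hypothetical; only the field names come from the filters above):

```python
from urllib.parse import urlencode

# Hypothetical Solr endpoint; repeated "fq" entries stack as AND filters.
params = [
    ("q", "*:*"),
    ("fq", 'type_ss:"a"'),            # facet filter: type is "a"
    ("fq", "year_i:[2010 TO 2020}"),  # half-open range: 2010 <= year < 2020
    ("rows", "20"),
    ("start", "20"),                  # page 2 at 20 rows per page
]
url = "http://localhost:8983/solr/mycore/select?" + urlencode(params)
```

With 1063 hits and 20 rows per page this yields 54 pages, matching the header line.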
  1. Die deutsche Zeitschrift für Dokumentation, Informationswissenschaft und Informationspraxis von 1950 bis 2011 : eine vorläufige Bilanz in vier Abschnitten (2012) 0.04
    0.037439607 = product of:
      0.074879214 = sum of:
        0.074879214 = sum of:
          0.037876792 = weight(_text_:b in 402) [ClassicSimilarity], result of:
            0.037876792 = score(doc=402,freq=2.0), product of:
              0.16126883 = queryWeight, product of:
                3.542962 = idf(docFreq=3476, maxDocs=44218)
                0.045518078 = queryNorm
              0.23486741 = fieldWeight in 402, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.542962 = idf(docFreq=3476, maxDocs=44218)
                0.046875 = fieldNorm(doc=402)
          0.037002426 = weight(_text_:22 in 402) [ClassicSimilarity], result of:
            0.037002426 = score(doc=402,freq=2.0), product of:
              0.15939656 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045518078 = queryNorm
              0.23214069 = fieldWeight in 402, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=402)
      0.5 = coord(1/2)
    
    Date
    22. 7.2012 19:35:26
    Footnote
    Consists of four parts: Part 1: Eden, D., A. Arndt, A. Hoffer, T. Raschke u. P. Schön: Die Nachrichten für Dokumentation in den Jahren 1950 bis 1962 (S.159-163). Part 2: Brose, M., E. Durst, D. Nitzsche, D. Veckenstedt u. R. Wein: Statistische Untersuchung der Fachzeitschrift "Nachrichten für Dokumentation" (NfD) 1963-1975 (S.164-170). Part 3: Bösel, J., G. Ebert, P. Garz, M. Iwanow u. B. Russ: Methoden und Ergebnisse einer statistischen Auswertung der Fachzeitschrift "Nachrichten für Dokumentation" (NfD) 1976 bis 1988 (S.171-174). Part 4: Engelage, H., S. Jansen, R. Mertins, K. Redel u. S. Ring: Statistische Untersuchung der Fachzeitschrift "Nachrichten für Dokumentation" (NfD) / "Information. Wissenschaft & Praxis" (IWP) 1989-2011 (S.164-170).
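The score breakdown displayed with the first hit is Lucene's ClassicSimilarity (tf-idf) explanation, and its arithmetic can be re-checked directly. A minimal sketch using the constants printed in that explain tree (the function name is mine, not Lucene's):

```python
import math

def classic_term_weight(freq, idf, query_norm, field_norm):
    """One weight(_text_:...) leaf of the explain tree: queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                  # 1.4142135 = tf(freq=2.0)
    query_weight = idf * query_norm       # idf * queryNorm
    field_weight = tf * idf * field_norm  # tf * idf * fieldNorm
    return query_weight * field_weight

# Constants exactly as printed in the listing above.
w_b  = classic_term_weight(2.0, 3.542962,  0.045518078, 0.046875)  # ~0.037876792
w_22 = classic_term_weight(2.0, 3.5018296, 0.045518078, 0.046875)  # ~0.037002426
total = 0.5 * (w_b + w_22)  # coord(1/2) halves the sum -> ~0.037439607
```

The 0.04 shown next to each title is this total rounded to two decimals.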
  2. Engerer, V.: Metapher und Wissenstransfers im informationsbezogenen Diskurs (2013) 0.04
    
    Abstract
    This article attempts to apply properties of Schön's generative metaphor to the "static" result, namely technical terminology that arises when one scientific field is carried over into another. Metaphor is treated here as a knowledge-transferring procedure that carries concepts from one discipline into another. In addition, metaphor is discussed as part of the professional jargon of information science and library practice. A brief survey of the Danish vocabulary of library metaphors shows, among other things, that an "ontological gradient of experience" from abstract to concrete is at work in this domain: many concepts from library technology, human-computer interaction, and information science are explained with the help of more concrete concepts from better understood domains, e.g. the domain of food intake. At the same time, the share of "demetaphorized", formerly metaphorical expressions appears to be high here as well, just as it is for "worn-down" expressions in general language. The analysis closes with an outlook on a research field that makes deliberate use of the conceptual productivity of the notion of metaphor for studying terminological relations between the sciences.
    Date
    22. 3.2013 14:06:49
  3. Kempf, A.O.; Zapilko, B.: Normdatenpflege in Zeiten der Automatisierung : Erstellung und Evaluation automatisch aufgebauter Thesaurus-Crosskonkordanzen (2013) 0.04
    
    Date
    18. 8.2013 12:53:22
  4. Rodríguez Bravo, B.; Travieso Rodríguez, C.; Simões, M.G. de M.; Freitas, M.C.V. de: Evaluating discovery tools in Portuguese and Spanish academic libraries (2014) 0.04
    
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  5. Luca, E.W. de; Dahlberg, I.: Die Multilingual Lexical Linked Data Cloud : eine mögliche Zugangsoptimierung? (2014) 0.04
    
    Abstract
    A great deal of information is already available on the Web or can be obtained from isolated structured data stores such as information systems and social networks. Data integration through post-processing or through search mechanisms (e.g. D2R) is therefore important in order to make information generally usable. Semantic technologies enable the use of defined connections (typed links) that record how the data relate to one another, which benefits every application that can reuse the knowledge contained in the data. To build a semantic map of the data, we need knowledge about the individual data sets and their relations to other data. This paper presents our work on using Lexical Linked Data (LLD) through a meta-model that contains all resources and, in addition, makes it possible to find them from different points of view. We thereby connect existing work on knowledge fields (based on the Information Coding Classification) with the Multilingual Lexical Linked Data Cloud (based on the RDF/OWL representation of EuroWordNet and the related integrated lexical resources MultiWordNet, MEMODATA, and the Hamburg Metapher DB).
    Date
    22. 9.2014 19:00:13
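The typed links described in the abstract above can be pictured as plain subject-predicate-object triples. A minimal stdlib sketch; the resource identifiers and link types below are invented for illustration, not taken from the actual LLD cloud:

```python
# A tiny triple store: each typed link is a (subject, link_type, object) tuple.
triples = set()

def add_link(subject, link_type, obj):
    """Record a typed link between two resources."""
    triples.add((subject, link_type, obj))

def links_of_type(link_type):
    """All resource pairs connected by one particular link type."""
    return {(s, o) for (s, t, o) in triples if t == link_type}

# Illustrative links between resources named in the abstract
# (identifiers and predicates are hypothetical).
add_link("EuroWordNet:library", "sameConceptAs", "MultiWordNet:biblioteca")
add_link("EuroWordNet:library", "inKnowledgeField", "ICC:InformationScience")
same = links_of_type("sameConceptAs")
```

Because every edge carries an explicit type, an application can follow just the relation it needs, which is the reuse advantage the abstract points to.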
  6. Baga, J.; Hoover, L.; Wolverton, R.E.: Online, practical, and free cataloging resources (2013) 0.04
    
    Date
    10. 9.2000 17:38:22
    Type
    b
  7. Martin, K.E.; Mundle, K.: Positioning libraries for a new bibliographic universe (2014) 0.04
    
    Date
    10. 9.2000 17:38:22
    Type
    b
  8. Erickson, L.B.; Wisniewski, P.; Xu, H.; Carroll, J.M.; Rosson, M.B.; Perkins, D.F.: The boundaries between : parental involvement in a teen's online world (2016) 0.04
    
    Abstract
    The increasing popularity of the Internet and social media is creating new and unique challenges for parents and adolescents regarding the boundaries between parental control and adolescent autonomy in virtual spaces. Drawing on developmental psychology and Communication Privacy Management (CPM) theory, we conduct a qualitative study to examine the challenge between parental concern for adolescent online safety and teens' desire to independently regulate their own online experiences. Analysis of 12 parent-teen pairs revealed five distinct challenges: (a) increased teen autonomy and decreased parental control resulting from teens' direct and unmediated access to virtual spaces, (b) the shift in power to teens who are often more knowledgeable about online spaces and technology, (c) the use of physical boundaries by parents as a means to control virtual spaces, (d) an increase in indirect boundary control strategies such as covert monitoring, and (e) the blurring of lines in virtual spaces between parents' teens and teens' friends.
    Date
    7. 5.2016 20:05:22
  9. Pertile, S. de L.; Moreira, V.P.: Comparing and combining content- and citation-based approaches for plagiarism detection (2016) 0.04
    
    Abstract
    The vast amount of scientific publications available online makes it easier for students and researchers to reuse text from other authors and harder to check the originality of a given text. Reusing text without crediting the original authors is considered plagiarism. A number of studies have reported the prevalence of plagiarism in academia. As a consequence, numerous institutions and researchers are dedicated to devising systems to automate the process of checking for plagiarism. This work focuses on the problem of detecting text reuse in scientific papers. The contributions of this paper are twofold: (a) we survey the existing approaches for plagiarism detection based on content, based on content and structure, and based on citations and references; and (b) we compare content- and citation-based approaches with the goal of evaluating whether they are complementary and whether their combination can improve the quality of the detection. We carried out experiments with real data sets of scientific papers and concluded that a combination of the methods can be beneficial.
    Date
    20. 9.2016 19:51:22
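The combination of content- and citation-based evidence that the abstract above evaluates can be sketched as two set overlaps blended with a weight. The Jaccard measure and the 50/50 weighting here are assumptions for illustration, not the paper's exact method:

```python
def jaccard(a, b):
    """|A intersect B| / |A union B|, or 0.0 when both sets are empty."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def reuse_score(words1, words2, refs1, refs2, w_content=0.5):
    """Blend content overlap (shared terms) with citation overlap (shared references)."""
    return (w_content * jaccard(words1, words2)
            + (1 - w_content) * jaccard(refs1, refs2))

# Two toy documents: half their terms and one of three references overlap.
score = reuse_score({"a", "b", "c"}, {"b", "c", "d"},
                    {"r1", "r2"}, {"r1", "r3"})  # 0.5*0.5 + 0.5*(1/3)
```

A pair can score high on either channel alone, which is why the paper asks whether the two are complementary.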
  10. Zanibbi, R.; Yuan, B.: Keyword and image-based retrieval for mathematical expressions (2011) 0.04
    
    Date
    22. 2.2017 12:53:49
  11. Tang, X.-B.; Liu, G.-C.; Yang, J.; Wei, W.: Knowledge-based financial statement fraud detection system : based on an ontology and a decision tree (2018) 0.04
    
    Date
    21. 6.2018 10:22:43
  12. Tang, X.; Chen, L.; Cui, J.; Wei, B.: Knowledge representation learning with entity descriptions, hierarchical types, and textual relations (2019) 0.04
    
    Date
    17. 3.2019 13:22:53
  13. Lercher, A.: Efficiency of scientific communication : a survey of world science (2010) 0.03
    0.031199675 = product of:
      0.06239935 = sum of:
        0.06239935 = sum of:
          0.031563994 = weight(_text_:b in 3997) [ClassicSimilarity], result of:
            0.031563994 = score(doc=3997,freq=2.0), product of:
              0.16126883 = queryWeight, product of:
                3.542962 = idf(docFreq=3476, maxDocs=44218)
                0.045518078 = queryNorm
              0.19572285 = fieldWeight in 3997, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.542962 = idf(docFreq=3476, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3997)
          0.030835358 = weight(_text_:22 in 3997) [ClassicSimilarity], result of:
            0.030835358 = score(doc=3997,freq=2.0), product of:
              0.15939656 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045518078 = queryNorm
              0.19345059 = fieldWeight in 3997, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3997)
      0.5 = coord(1/2)
    
    Abstract
    The aim of this study was to measure the efficiency of the system by which scientists worldwide communicate results to each other, providing one measure of the degree to which the system, including all media, functions well. A randomly selected and representative sample of 246 active research scientists worldwide was surveyed. The main measure was the reported rate of "late finds": scientific literature that would have been useful to scientists' projects if it had been found at the beginning of these projects. The main result was that 46% of the sample reported late finds (±6.25%, p < 0.05). Among respondents from European Union countries or other countries classified as "high income" by the World Bank, 42% reported late finds. Among respondents from low- and middle-income countries, 56% reported late finds. The 42% rate in high-income countries in 2009 can be compared with results of earlier surveys by Martyn (1964a, b, 1987). These earlier surveys found a rate of 22% late finds in 1963-1964 and a rate of 27% in 1985-1986. Respondents were also queried about search habits, but this study failed to support any explanations for this increase in the rate of late finds. This study also permits a crude estimate of the cost in time and money of the increase in late finds.
  14. Leydesdorff, L.; Bornmann, L.: How fractional counting of citations affects the impact factor : normalization in terms of differences in citation potentials among fields of science (2011) 0.03
    
    Abstract
    The Impact Factors (IFs) of the Institute for Scientific Information suffer from a number of drawbacks, among them the statistics (why should one use the mean and not the median?) and the incomparability among fields of science because of systematic differences in citation behavior among fields. Can these drawbacks be counteracted by fractionally counting citation weights instead of using whole numbers in the numerators? (a) Fractional citation counts are normalized in terms of the citing sources and thus would take into account differences in citation behavior among fields of science. (b) Differences in the resulting distributions can be tested statistically for their significance at different levels of aggregation. (c) Fractional counting can be generalized to any document set including journals or groups of journals, and thus the significance of differences among both small and large sets can be tested. A list of fractionally counted IFs for 2008 is available online at http://www.leydesdorff.net/weighted_if/weighted_if.xls The between-group variance among the 13 fields of science identified in the U.S. Science and Engineering Indicators is no longer statistically significant after this normalization. Although citation behavior differs largely between disciplines, the reflection of these differences in fractionally counted citation distributions cannot be used as a reliable instrument for the classification.
    Date
    22. 1.2011 12:51:07
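Fractional counting, as described in the abstract above, weights each citation by the inverse of the citing paper's reference-list length instead of adding a whole 1 per citation. A minimal sketch; the data layout is an assumption:

```python
def fractional_citation_count(cited, citing_refs):
    """Count citations to `cited`, each weighted by 1/len(reference list)
    of the citing paper, instead of a whole 1 per citation."""
    return sum(1.0 / len(refs)
               for refs in citing_refs.values()
               if cited in refs)

# Two hypothetical citing papers: one cites 2 works, the other cites 4.
citing_refs = {
    "paperA": ["X", "Y"],
    "paperB": ["X", "Z", "W", "V"],
}
frac = fractional_citation_count("X", citing_refs)  # 1/2 + 1/4 = 0.75
```

A citation from a field with long reference lists thus contributes less, which is exactly the normalization across citation cultures the abstract argues for.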
  15. Stvilia, B.; Hinnant, C.C.; Schindler, K.; Worrall, A.; Burnett, G.; Burnett, K.; Kazmer, M.M.; Marty, P.F.: Composition of scientific teams and publication productivity at a national science lab (2011) 0.03
    
    Date
    22. 1.2011 13:19:42
  16. Hjoerland, B.: User-based and cognitive approaches to knowledge organization : a theoretical analysis of the research literature (2013) 0.03
    
    Date
    22. 2.2013 11:49:13
  17. Baião Salgado Silva, G.; Lima, G.Â. Borém de Oliveira: Using topic maps in establishing compatibility of semantically structured hypertext contents (2012) 0.03
    
    Abstract
    Considering the characteristics of hypertext systems and problems such as cognitive overload and the disorientation of users, this project studies subject hypertext documents that have undergone conceptual structuring using facets for content representation and improvement of information retrieval during navigation. The main objective was to assess the possibility of the application of topic map technology for automating the compatibilization process of these structures. For this purpose, two dissertations from the UFMG Information Science Post-Graduation Program were adopted as samples. Both dissertations had been duly analyzed and structured on the MHTX (Hypertextual Map) prototype database. The faceted structures of both dissertations, which had been represented in conceptual maps, were then converted into topic maps. It was then possible to use the merge property of the topic maps to promote the semantic interrelationship between the maps and, consequently, between the hypertextual information resources proper. The merge results were then analyzed in the light of theories dealing with the compatibilization of languages developed within the realm of information technology and librarianship from the 1960s on. The main goals accomplished were: (a) the detailed conceptualization of the merge process of the topic maps, considering the possible compatibilization levels and the applicability of this technology in the integration of faceted structures; and (b) the production of a detailed sequence of steps that may be used in the implementation of topic maps based on faceted structures.
    Date
    22. 2.2013 11:39:23
  18. Smiraglia, R.P.: ISKO 12's bookshelf - evolving intension : an editorial (2013) 0.03
    0.031199675 = product of:
      0.06239935 = sum of:
        0.06239935 = sum of:
          0.031563994 = weight(_text_:b in 636) [ClassicSimilarity], result of:
            0.031563994 = score(doc=636,freq=2.0), product of:
              0.16126883 = queryWeight, product of:
                3.542962 = idf(docFreq=3476, maxDocs=44218)
                0.045518078 = queryNorm
              0.19572285 = fieldWeight in 636, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.542962 = idf(docFreq=3476, maxDocs=44218)
                0.0390625 = fieldNorm(doc=636)
          0.030835358 = weight(_text_:22 in 636) [ClassicSimilarity], result of:
            0.030835358 = score(doc=636,freq=2.0), product of:
              0.15939656 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045518078 = queryNorm
              0.19345059 = fieldWeight in 636, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=636)
      0.5 = coord(1/2)
    
    Abstract
The 2012 biennial international research conference of the International Society for Knowledge Organization was held August 6-9, in Mysore, India. It was the second international ISKO conference to be held in India (Canada and India are the only countries to have hosted two international ISKO conferences), and for many attendees travel to the exotic Indian subcontinent was a new experience. Interestingly, the mix of people attending was quite different from recent meetings held in Europe or North America. The conference was lively and, as usual, jam-packed with new research. Registration took place on a veranda in the garden of the B. N. Bahadur Institute of Management Sciences where the meetings were held at the University of Mysore. This graceful tree (Figure 1) kept us company and kept watch over our considerations (as indeed it does over the academic enterprise of the Institute). The conference theme was "Categories, Contexts and Relations in Knowledge Organization." The opening and closing sessions fittingly were devoted to serious introspection about the direction of the domain of knowledge organization. This editorial, in line with those following past international conferences, is an attempt to comment on the state of the domain by reflecting domain-analytically on the proceedings of the conference, primarily using bibliometric measures. In general, it seems the domain is secure in its intellectual moorings, as it continues to welcome a broad granular array of shifting research questions in its intension. The continual concretizing of the theoretical core of knowledge organization (KO) seems to act as a catalyst for emergent ideas, which can be observed as part of the evolving intension of the domain.
    Date
    22. 2.2013 11:43:34
  19. He, R.; Wang, J.; Tian, J.; Chu, C.-T.; Mauney, B.; Perisic, I.: Session analysis of people search within a professional social network (2013) 0.03
    0.031199675 = product of:
      0.06239935 = sum of:
        0.06239935 = sum of:
          0.031563994 = weight(_text_:b in 743) [ClassicSimilarity], result of:
            0.031563994 = score(doc=743,freq=2.0), product of:
              0.16126883 = queryWeight, product of:
                3.542962 = idf(docFreq=3476, maxDocs=44218)
                0.045518078 = queryNorm
              0.19572285 = fieldWeight in 743, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.542962 = idf(docFreq=3476, maxDocs=44218)
                0.0390625 = fieldNorm(doc=743)
          0.030835358 = weight(_text_:22 in 743) [ClassicSimilarity], result of:
            0.030835358 = score(doc=743,freq=2.0), product of:
              0.15939656 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045518078 = queryNorm
              0.19345059 = fieldWeight in 743, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=743)
      0.5 = coord(1/2)
    
    Date
    19. 4.2013 20:31:22
  20. Liu, R.-L.: ¬A passage extractor for classification of disease aspect information (2013) 0.03
    0.031199675 = product of:
      0.06239935 = sum of:
        0.06239935 = sum of:
          0.031563994 = weight(_text_:b in 1107) [ClassicSimilarity], result of:
            0.031563994 = score(doc=1107,freq=2.0), product of:
              0.16126883 = queryWeight, product of:
                3.542962 = idf(docFreq=3476, maxDocs=44218)
                0.045518078 = queryNorm
              0.19572285 = fieldWeight in 1107, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.542962 = idf(docFreq=3476, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1107)
          0.030835358 = weight(_text_:22 in 1107) [ClassicSimilarity], result of:
            0.030835358 = score(doc=1107,freq=2.0), product of:
              0.15939656 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045518078 = queryNorm
              0.19345059 = fieldWeight in 1107, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1107)
      0.5 = coord(1/2)
    
    Abstract
    Retrieval of disease information is often based on several key aspects such as etiology, diagnosis, treatment, prevention, and symptoms of diseases. Automatic identification of disease aspect information is thus essential. In this article, I model the aspect identification problem as a text classification (TC) problem in which a disease aspect corresponds to a category. The disease aspect classification problem poses two challenges to classifiers: (a) a medical text often contains information about multiple aspects of a disease and hence produces noise for the classifiers and (b) text classifiers often cannot extract the textual parts (i.e., passages) about the categories of interest. I thus develop a technique, PETC (Passage Extractor for Text Classification), that extracts passages (from medical texts) for the underlying text classifiers to classify. Case studies on thousands of Chinese and English medical texts show that PETC enhances a support vector machine (SVM) classifier in classifying disease aspect information. PETC also performs better than three state-of-the-art classifier enhancement techniques, including two passage extraction techniques for text classifiers and a technique that employs term proximity information to enhance text classifiers. The contribution is of significance to evidence-based medicine, health education, and healthcare decision support. PETC can be used in those application domains in which a text to be classified may have several parts about different categories.
    Date
    28.10.2013 19:22:57
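The explanation trees above follow Lucene's ClassicSimilarity (TF-IDF) formula: each term leaf is queryWeight (idf x queryNorm) times fieldWeight (tf x idf x fieldNorm), and the document score sums the matching leaves and applies the coord factor. As an illustration only, a minimal sketch reproducing the score of doc 633 from the printed constants (the idf definition log(maxDocs/(docFreq+1)) + 1 is the standard ClassicSimilarity one; all numeric inputs are taken verbatim from the tree):

```python
import math

def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
    """One leaf of a ClassicSimilarity explanation tree."""
    tf = math.sqrt(freq)                             # 1.4142135 for freq=2.0
    idf = math.log(max_docs / (doc_freq + 1)) + 1.0  # 3.542962 for docFreq=3476
    query_weight = idf * query_norm                  # idf * queryNorm
    field_weight = tf * idf * field_norm             # tf * idf * fieldNorm
    return query_weight * field_weight

# Constants as printed for doc 633:
w_b  = term_weight(2.0, 3476, 44218, 0.045518078, 0.0390625)  # ~0.031563994
w_22 = term_weight(2.0, 3622, 44218, 0.045518078, 0.0390625)  # ~0.030835358

# sum of the two leaves, then coord(1/2) = 0.5
doc_score = (w_b + w_22) * 0.5                                # ~0.031199675
```

The reconstructed values agree with the printed weights to the displayed precision, which confirms the trees are plain ClassicSimilarity output rather than a customized similarity.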

Languages

  • e 798
  • d 258
  • a 1

Types

  • el 85
  • b 5

Themes