Search (6295 results, page 2 of 315)

  • Active filter: language_ss:"e"
  1. He, L.; Nahar, V.: Reuse of scientific data in academic publications : an investigation of Dryad Digital Repository (2016) 0.05
    0.047838215 = product of:
      0.09567643 = sum of:
        0.08013639 = weight(_text_:data in 3072) [ClassicSimilarity], result of:
          0.08013639 = score(doc=3072,freq=20.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.662865 = fieldWeight in 3072, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=3072)
        0.015540041 = product of:
          0.031080082 = sum of:
            0.031080082 = weight(_text_:22 in 3072) [ClassicSimilarity], result of:
              0.031080082 = score(doc=3072,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.23214069 = fieldWeight in 3072, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3072)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
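    The breakdown above (and in each entry below) is a Lucene ClassicSimilarity explain tree: a term's weight is queryWeight (idf x queryNorm) multiplied by fieldWeight (sqrt(termFreq) x idf x fieldNorm), and partial matches are scaled by the coord factors. A minimal sketch that reproduces this entry's score from the numbers shown; the helper function below is illustrative only and is not part of the Lucene API:

      from math import sqrt

      # Constants copied from the explain tree of entry 1 (doc=3072).
      IDF_DATA, IDF_22 = 3.1620505, 3.5018296   # idf of "data" and "22"
      QUERY_NORM = 0.03823278
      FIELD_NORM = 0.046875

      def term_weight(freq, idf):
          query_weight = idf * QUERY_NORM                 # 0.120893985 for "data"
          field_weight = sqrt(freq) * idf * FIELD_NORM    # tf(freq) = sqrt(freq)
          return query_weight * field_weight

      w_data = term_weight(20.0, IDF_DATA)      # ~0.08013639
      w_22   = term_weight(2.0, IDF_22) * 0.5   # inner coord(1/2)
      score  = (w_data + w_22) * 0.5            # outer coord(2/4)
      print(round(score, 9))                    # ~0.047838215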
    
    Abstract
    Purpose - In recent years, a large number of data repositories have been built and used. However, the extent to which scientific data are re-used in academic publications is still unknown. The purpose of this paper is to explore the functions of re-used scientific data in scholarly publications in different fields. Design/methodology/approach - To address these questions, the authors identified 827 publications citing resources in the Dryad Digital Repository indexed by Scopus from 2010 to 2015. Findings - The results show that: the number of citations to scientific data increases sharply over the years, but mainly from data-intensive disciplines, such as agricultural science, biology, environmental science and medicine; the majority of citations are from the originating articles; and researchers tend to reuse data produced by their own research groups. Research limitations/implications - Dryad data may be re-used without being formally cited. Originality/value - The conservatism in data sharing suggests that more should be done to encourage researchers to re-use others' data.
    Date
    20. 1.2015 18:30:22
  2. Frank, S.: Cataloging digital geographic data in the information infrastructure (1997) 0.05
    0.04769496 = product of:
      0.09538992 = sum of:
        0.05912982 = weight(_text_:data in 3352) [ClassicSimilarity], result of:
          0.05912982 = score(doc=3352,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.48910472 = fieldWeight in 3352, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.109375 = fieldNorm(doc=3352)
        0.0362601 = product of:
          0.0725202 = sum of:
            0.0725202 = weight(_text_:22 in 3352) [ClassicSimilarity], result of:
              0.0725202 = score(doc=3352,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.5416616 = fieldWeight in 3352, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3352)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Source
    Encyclopedia of library and information science. Vol.59, [=Suppl.22]
  3. Cronin, B.: Thinking about data (2013) 0.05
    0.04769496 = product of:
      0.09538992 = sum of:
        0.05912982 = weight(_text_:data in 4347) [ClassicSimilarity], result of:
          0.05912982 = score(doc=4347,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.48910472 = fieldWeight in 4347, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.109375 = fieldNorm(doc=4347)
        0.0362601 = product of:
          0.0725202 = sum of:
            0.0725202 = weight(_text_:22 in 4347) [ClassicSimilarity], result of:
              0.0725202 = score(doc=4347,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.5416616 = fieldWeight in 4347, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4347)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    22. 3.2013 16:18:36
  4. Chowdhury, G.G.: Template mining for information extraction from digital documents (1999) 0.05
    0.04769496 = product of:
      0.09538992 = sum of:
        0.05912982 = weight(_text_:data in 4577) [ClassicSimilarity], result of:
          0.05912982 = score(doc=4577,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.48910472 = fieldWeight in 4577, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.109375 = fieldNorm(doc=4577)
        0.0362601 = product of:
          0.0725202 = sum of:
            0.0725202 = weight(_text_:22 in 4577) [ClassicSimilarity], result of:
              0.0725202 = score(doc=4577,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.5416616 = fieldWeight in 4577, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4577)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    2. 4.2000 18:01:22
    Theme
    Data Mining
  5. Riehm, U.; Böhle, K.; Wingert, B.; Gabel-Becker, I.; Loeben, M.: Autoren, Verlage, Nutzer : Elektronisches Publizieren in der Bundesrepublik Deutschland (1989) 0.05
    0.04769131 = product of:
      0.19076525 = sum of:
        0.19076525 = weight(_text_:becker in 5914) [ClassicSimilarity], result of:
          0.19076525 = score(doc=5914,freq=2.0), product of:
            0.25693014 = queryWeight, product of:
              6.7201533 = idf(docFreq=144, maxDocs=44218)
              0.03823278 = queryNorm
            0.742479 = fieldWeight in 5914, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7201533 = idf(docFreq=144, maxDocs=44218)
              0.078125 = fieldNorm(doc=5914)
      0.25 = coord(1/4)
    
  6. Swartzberg, T.: Identifying and spreading expertise : The knowledge manager's brief: to disseminate a company's data and the know-how of its staff (1999) 0.05
    0.047318287 = product of:
      0.094636574 = sum of:
        0.0506827 = weight(_text_:data in 4179) [ClassicSimilarity], result of:
          0.0506827 = score(doc=4179,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.4192326 = fieldWeight in 4179, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.09375 = fieldNorm(doc=4179)
        0.043953877 = product of:
          0.087907754 = sum of:
            0.087907754 = weight(_text_:22 in 4179) [ClassicSimilarity], result of:
              0.087907754 = score(doc=4179,freq=4.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.6565931 = fieldWeight in 4179, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4179)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    29.11.1999 12:18:22
    Source
    International Herald Tribune. 15. Nov. 1999, S.22
  7. Computer - Neue Medien - Elektronisches Publizieren (1993) 0.05
    0.046600167 = product of:
      0.09320033 = sum of:
        0.016894234 = weight(_text_:data in 5906) [ClassicSimilarity], result of:
          0.016894234 = score(doc=5906,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.1397442 = fieldWeight in 5906, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=5906)
        0.0763061 = weight(_text_:becker in 5906) [ClassicSimilarity], result of:
          0.0763061 = score(doc=5906,freq=2.0), product of:
            0.25693014 = queryWeight, product of:
              6.7201533 = idf(docFreq=144, maxDocs=44218)
              0.03823278 = queryNorm
            0.29699162 = fieldWeight in 5906, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7201533 = idf(docFreq=144, maxDocs=44218)
              0.03125 = fieldNorm(doc=5906)
      0.5 = coord(2/4)
    
    Content
    Enthält u.a. die folgenden Beiträge: RIESENHUBER, H.: Am Ende steht das Wort: Kultur und Technik als Verbündete - ein Plädoyer; STEIDEL, M.: Von Bindestrich-Informatik bis Chaostheorie (Hüthig); HEMPELMANN, G.: Laudatio für das Arbeitsbuch (Markt & Technik); GÖTZ, B.: Voll daneben: sind Computerbücher noch immer anwendergerecht?; SCHOLZ, H.-W.: Das Buch lernt sprechen, singen und tanzen (Langenscheidt); STUMPF, P.: Der Laptop als Gourmet-Führer (Rossipaul); BURNELEIT, H.-D.: Wer zu früh kommt, den bestraft der Markt (C.H. Beck); KEMP, A. de: Erzfeind oder Kumpel: das ist nicht die Frage (Springer); SCHOLZ, I.: Alles digital (Elektronisches Publizieren); MERTENS, E.: Wichtig ist die Einführung beim Kunden (Olms); SCHRÖDER, M.: Database publishing; GRUNDMANN, U.: Champagner von der CD (EMS/Econ); PRIBILLA, P.: Any to any (Siemens); HEKER, H.: Rechtsfragen der elektronischen Textkommunikation; PLENZ, R.: Verlegen mit Äpfeln und Quark (DTP); PLENZ, R.: Typographische Qualifikation entscheidet (DTP); LIEDER, R.: Cover mit der Maus (Sybex); STYRNOL, H.: Kompetenz schlägt heiße Nadel; KAETZ, R.: Akzente mit Butterfly (Laden-Präsentation); RINKA, M.: Flankierende Maßnahmen Zeitschriften; ZEBOLD, P.: Tools für den Verkauf (Zeitschriften); STEINBECK, P.: Lose-Disketten-Werk; STEINHAUS, I.: Man trägt Diskette; BORISCH, M.: Kompetenter Partner auch für fun und action; KESSLER, C.: Schneller schlau (Wissenssoftware, MSPI); KRAPP, S.: Computer am Dienstag (CAD), Chaos am Mittwoch (CAM), oder: wieviel EDV braucht der Azubi?; STEINBRINK, B.: Multimedia: Standards für die Verlagswelt (Markt & Technik); MONDEL, N.: Der Krieg der Systeme findet nicht statt (Tewi); FERCHL, I.: Online in den Markt (Springer); FERCHL, I.: Nicht hurtig, HÜthig; BLAHACEK, R.: Alle Stückerln (Erb-Verl.); MENZEL, M.: Porsche oder Goggo (Rossipaul); MENZEL, M.: Sharebären und MS-Dosen (Systhema); MENZEL, M.: Populär, aber nicht platt (Tewi); MENZEL, M.: Von Funk zu Fuzzy (Franzis); GRUNDMANN, U.: Aktive Lebenshilfe: und das möglichst preisgünstig (Data-Becker); GRUNDMANN, U.: Die roten Dreiecke bleiben sich treu (Addison-Wesley); GRUNDMANN, U.: Große Bücher für wenig Geld (BHV); GRUNDMANN, U.: ... nämlich ein Dos-Buch genauso zu vermarkten wie 'Scarlet' (Sybex); MENZEL, M.: Langsam einsickern (dtv/Beck); SCHMITZ, A.: Le style c'est l'homme (Rowohlt); SCHINZEL, W.H.: CD-ROM: eine Erfolgsstory; QUEISSER, M.: Kataloge auf der Silberscheibe; SOMMERFELD, B.: Ran an Eunet; LESSMAN, F. u. H. KELLER: Online mit KNO; ZAAG, J.: Vorreiter (KNO); SCHÖDER, M.: Arno Schmidts anderer Zettelkasten (Relationale Datenbanken); WIESNER, M.: One world of informations: OSI und EDI; WEIGEL, F.: Intermezzo mit X12, Libe für EDI (Harrassowitz);
  8. Salaba, A.; Zeng, M.L.: Extending the "Explore" user task beyond subject authority data into the linked data sphere (2014) 0.05
    0.045274492 = product of:
      0.090548985 = sum of:
        0.072418936 = weight(_text_:data in 1465) [ClassicSimilarity], result of:
          0.072418936 = score(doc=1465,freq=12.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.59902847 = fieldWeight in 1465, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1465)
        0.01813005 = product of:
          0.0362601 = sum of:
            0.0362601 = weight(_text_:22 in 1465) [ClassicSimilarity], result of:
              0.0362601 = score(doc=1465,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.2708308 = fieldWeight in 1465, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1465)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    "Explore" is a user task introduced in the Functional Requirements for Subject Authority Data (FRSAD) final report. Through various case scenarios, the authors discuss how structured data, presented based on Linked Data principles and using knowledge organisation systems (KOS) as the backbone, extend the explore task within and beyond subject authority data.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  9. Candela, G.: ¬An automatic data quality approach to assess semantic data from cultural heritage institutions (2023) 0.05
    0.045274492 = product of:
      0.090548985 = sum of:
        0.072418936 = weight(_text_:data in 997) [ClassicSimilarity], result of:
          0.072418936 = score(doc=997,freq=12.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.59902847 = fieldWeight in 997, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=997)
        0.01813005 = product of:
          0.0362601 = sum of:
            0.0362601 = weight(_text_:22 in 997) [ClassicSimilarity], result of:
              0.0362601 = score(doc=997,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.2708308 = fieldWeight in 997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=997)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    In recent years, cultural heritage institutions have been exploring the benefits of applying Linked Open Data to their catalogs and digital materials. Innovative and creative methods have emerged to publish and reuse digital contents to promote computational access, such as the concepts of Labs and Collections as Data. Data quality has become a requirement for researchers and training methods based on artificial intelligence and machine learning. This article explores how the quality of Linked Open Data made available by cultural heritage institutions can be automatically assessed. The results obtained can be useful for other institutions who wish to publish and assess their collections.
    Date
    22. 6.2023 18:23:31
  10. Jia, J.: From data to knowledge : the relationships between vocabularies, linked data and knowledge graphs (2021) 0.04
    0.04454566 = product of:
      0.08909132 = sum of:
        0.07614129 = weight(_text_:data in 106) [ClassicSimilarity], result of:
          0.07614129 = score(doc=106,freq=26.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.6298187 = fieldWeight in 106, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=106)
        0.012950035 = product of:
          0.02590007 = sum of:
            0.02590007 = weight(_text_:22 in 106) [ClassicSimilarity], result of:
              0.02590007 = score(doc=106,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.19345059 = fieldWeight in 106, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=106)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose - The purpose of this paper is to identify the concepts, component parts and relationships between vocabularies, linked data and knowledge graphs (KGs) from the perspectives of data and knowledge transitions. Design/methodology/approach - This paper uses conceptual analysis methods. This study focuses on distinguishing concepts and analyzing composition and intercorrelations to explore data and knowledge transitions. Findings - Vocabularies are the cornerstone for accurately building understanding of the meaning of data. Vocabularies provide for a data-sharing model and play an important role in supporting the semantic expression of linked data and defining the schema layer; they are also used for entity recognition, alignment and linkage for KGs. KGs, which consist of a schema layer and a data layer, are presented as cubes that organically combine vocabularies, linked data and big data. Originality/value - This paper first describes the composition of vocabularies, linked data and KGs. More importantly, this paper innovatively analyzes and summarizes the interrelatedness of these factors, which comes from frequent interactions between data and knowledge. The three factors empower each other and can ultimately empower the Semantic Web.
    Date
    22. 1.2021 14:24:32
  11. Vogt, F.; Wille, R.: TOSCANA - a graphical tool for analyzing and exploring data (1995) 0.04
    0.044148497 = product of:
      0.088296995 = sum of:
        0.06757694 = weight(_text_:data in 1901) [ClassicSimilarity], result of:
          0.06757694 = score(doc=1901,freq=8.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.5589768 = fieldWeight in 1901, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=1901)
        0.020720055 = product of:
          0.04144011 = sum of:
            0.04144011 = weight(_text_:22 in 1901) [ClassicSimilarity], result of:
              0.04144011 = score(doc=1901,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.30952093 = fieldWeight in 1901, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1901)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    TOSCANA is a computer program which allows online interaction with larger databases to analyse and explore data conceptually. It uses labelled line diagrams of concept lattices to communicate knowledge coded in the given data. The basic problem of creating online presentations of concept lattices is solved by composing prepared diagrams into nested line diagrams. A large number of applications in different areas have already shown that TOSCANA is a useful tool for many purposes.
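    To make the idea of a concept lattice concrete, here is a minimal sketch of formal concept analysis on a toy object-attribute context; the context and all names are invented for illustration, and TOSCANA itself works on much larger databases and renders the lattice as nested line diagrams:

      from itertools import combinations

      # Toy formal context: which objects carry which attributes.
      context = {
          "duck":  {"can_fly", "lays_eggs"},
          "eagle": {"can_fly", "lays_eggs", "predator"},
          "cat":   {"predator"},
      }
      attributes = set().union(*context.values())

      def extent(attrs):   # all objects having every attribute in attrs
          return {g for g, a in context.items() if attrs <= a}

      def intent(objs):    # all attributes shared by every object in objs
          return set.intersection(*(context[g] for g in objs)) if objs else set(attributes)

      # A formal concept is a pair (extent, intent) closed under the two maps;
      # closing every attribute subset enumerates all of them.
      concepts = set()
      for r in range(len(attributes) + 1):
          for attrs in combinations(sorted(attributes), r):
              e = extent(set(attrs))
              concepts.add((frozenset(e), frozenset(intent(e))))

      for e, i in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[0]))):
          print(sorted(e), sorted(i))   # the concepts, ordered by extent size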
    Source
    Knowledge organization. 22(1995) no.2, S.78-81
  12. Palsdottir, A.: Data literacy and management of research data : a prerequisite for the sharing of research data (2021) 0.04
    0.043889567 = product of:
      0.087779135 = sum of:
        0.07741911 = weight(_text_:data in 183) [ClassicSimilarity], result of:
          0.07741911 = score(doc=183,freq=42.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.6403884 = fieldWeight in 183, product of:
              6.4807405 = tf(freq=42.0), with freq of:
                42.0 = termFreq=42.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=183)
        0.010360028 = product of:
          0.020720055 = sum of:
            0.020720055 = weight(_text_:22 in 183) [ClassicSimilarity], result of:
              0.020720055 = score(doc=183,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.15476047 = fieldWeight in 183, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=183)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose - The purpose of this paper is to investigate knowledge and attitudes about research data management, the use of data management methods and the perceived need for support, in relation to participants' field of research. Design/methodology/approach - This is a quantitative study. Data were collected by an email survey sent to 792 academic researchers and doctoral students. The total response rate was 18% (N = 139). The measurement instrument consisted of six sets of questions about data management plans, the assignment of additional information to research data, metadata, standard file-naming systems, training in data management methods and the storing of research data. Findings - The main finding is that knowledge about the procedures of data management is limited, and data management is not a normal practice in the researchers' work. They were, however, in general, of the opinion that the university should take the lead by recommending and offering access to the necessary tools of data management. Taken together, the results indicate that there is an urgent need to increase the researchers' understanding of the importance of data management that is based on professional knowledge and to provide them with resources and training that enable them to make effective and productive use of data management methods. Research limitations/implications - The survey was sent to all members of the population, not a sample of it. Because of the response rate, the results cannot be generalized to all researchers at the university. Nevertheless, the findings may provide an important understanding of their research data procedures, in particular what characterizes their knowledge about data management and attitudes towards it. Practical implications - Awareness of these issues is essential for information specialists at academic libraries, together with other units within the universities, to be able to design infrastructures and develop services that suit the needs of the research community. The findings can be used to develop data policies and services based on professional knowledge of best practices and recognized standards that assist the research community in data management. Originality/value - The study contributes to the existing literature about research data management by examining the results by participants' field of research. Recognition of the issues is critical in order for information specialists, in collaboration with universities, to design relevant infrastructures and services for academics and doctoral students that can promote their research data management.
    Date
    20. 1.2015 18:30:22
  13. Koch, T.; Neuroth, H.; Day, M.: Renardus: Cross-browsing European subject gateways via a common classification system (DDC) (2003) 0.04
    0.04383669 = product of:
      0.08767338 = sum of:
        0.020905549 = weight(_text_:data in 3821) [ClassicSimilarity], result of:
          0.020905549 = score(doc=3821,freq=4.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.17292464 = fieldWeight in 3821, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3821)
        0.066767834 = weight(_text_:becker in 3821) [ClassicSimilarity], result of:
          0.066767834 = score(doc=3821,freq=2.0), product of:
            0.25693014 = queryWeight, product of:
              6.7201533 = idf(docFreq=144, maxDocs=44218)
              0.03823278 = queryNorm
            0.25986767 = fieldWeight in 3821, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7201533 = idf(docFreq=144, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3821)
      0.5 = coord(2/4)
    
    Content
    "1. The EU projeet Renardus Renardus is a project funded by the European Commission as part of the Information Society Technologies (IST) programme, part of the European Union's 5th Framework Programme. Partners in Renardus include national libraries, research centres and subject gateway services from Denmark, Finland, Germany, The Netherlands, Sweden and the UK, co-ordinated by the National Library of the Netherlands. The project aims to develop a Web-based service to enable searching and browsing across a range of distributed European-based information services designed for the academic and research communities - and in particular those services known as subject gateways. These gateways are services that provide access to Internet resources. They tend to be selective with regard to the resources they give access to, and are usually based an the manual creation of descriptive metadata. Services typically provide users with both search and browse facilities, and offen offer hierarchical browse structures based an subject classification schemes (Koch & Day, 1997). Predecessor projects like the EU project DESIRE have already developed solutions for the description of individual resources and for automatic classification at the level of an individual subject gateway using established classification systems. Renardus intends to develop a service that can cross-search and cross-browse a number of distributed subject gateways through the use of a common metadata profile and by the mapping all locally-used classification schemes to a common scheme. A thorough review of existing data models (Becker, et al., 2000) was used as the basis for the agreement of a minimum set of Dublin Core-based metadata elements that could be utilised as a common data model. A comprehensive mapping effort from the individual gateways' metadata element sets and content encoding schemes to the common profile has taken place. This provides the infrastructure for interoperability between all participating databases and thus is the necessary prerequisite for cross-searching."
  14. Kreider, J.: ¬The correlation of local citation data with citation data from Journal Citation Reports (1999) 0.04
    0.043608103 = product of:
      0.087216206 = sum of:
        0.071676165 = weight(_text_:data in 102) [ClassicSimilarity], result of:
          0.071676165 = score(doc=102,freq=16.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.5928845 = fieldWeight in 102, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=102)
        0.015540041 = product of:
          0.031080082 = sum of:
            0.031080082 = weight(_text_:22 in 102) [ClassicSimilarity], result of:
              0.031080082 = score(doc=102,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.23214069 = fieldWeight in 102, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=102)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    University librarians continue to face the difficult task of determining which journals remain crucial for their collections during these times of static financial resources and escalating journal costs. One evaluative tool, Journal Citation Reports (JCR), recently has become available on CD-ROM, making it simpler for librarians to use its citation data as input for ranking journals. But many librarians remain unconvinced that the global citation data from the JCR bears enough correspondence to their local situation to be useful. In this project, I explore the correlation between global citation data available from JCR and local citation data generated specifically for the University of British Columbia, for 20 subject fields in the sciences and social sciences. The significant correlations obtained in this study suggest that large research-oriented university libraries could consider substituting global citation data for local citation data when evaluating their journals, with certain cautions.
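    The kind of comparison described can be illustrated with a rank correlation between local and JCR citation counts. The journal names and counts below are hypothetical, and the study's own statistical choices are not reported here; this is only a sketch of the idea:

      # Hypothetical counts for five journals: local citations at one library
      # vs. global citations from JCR.
      local = {"J1": 120, "J2": 45, "J3": 300, "J4": 10, "J5": 80}
      jcr   = {"J1": 5400, "J2": 1900, "J3": 9100, "J4": 700, "J5": 1500}

      def ranks(values):          # ascending ranks; ties ignored for brevity
          order = sorted(range(len(values)), key=lambda i: values[i])
          r = [0.0] * len(values)
          for rank, i in enumerate(order, start=1):
              r[i] = float(rank)
          return r

      def pearson(x, y):
          n = len(x)
          mx, my = sum(x) / n, sum(y) / n
          cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
          vx = sum((a - mx) ** 2 for a in x) ** 0.5
          vy = sum((b - my) ** 2 for b in y) ** 0.5
          return cov / (vx * vy)

      journals = sorted(local)
      rho = pearson(ranks([local[j] for j in journals]),
                    ranks([jcr[j] for j in journals]))   # Spearman = Pearson on ranks
      print(rho)   # ~0.9 for these made-up counts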
    Date
    10. 9.2000 17:38:22
  15. Vaughan, L.; Chen, Y.: Data mining from web search queries : a comparison of Google trends and Baidu index (2015) 0.04
    0.043052107 = product of:
      0.086104214 = sum of:
        0.07315418 = weight(_text_:data in 1605) [ClassicSimilarity], result of:
          0.07315418 = score(doc=1605,freq=24.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.60511017 = fieldWeight in 1605, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1605)
        0.012950035 = product of:
          0.02590007 = sum of:
            0.02590007 = weight(_text_:22 in 1605) [ClassicSimilarity], result of:
              0.02590007 = score(doc=1605,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.19345059 = fieldWeight in 1605, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1605)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Numerous studies have explored the possibility of uncovering information from web search queries but few have examined the factors that affect web query data sources. We conducted a study that investigated this issue by comparing Google Trends and Baidu Index. Data from these two services are based on queries entered by users into Google and Baidu, two of the largest search engines in the world. We first compared the features and functions of the two services based on documents and extensive testing. We then carried out an empirical study that collected query volume data from the two sources. We found that data from both sources could be used to predict the quality of Chinese universities and companies. Despite the differences between the two services in terms of technology, such as differing methods of language processing, the search volume data from the two were highly correlated and combining the two data sources did not improve the predictive power of the data. However, there was a major difference between the two in terms of data availability. Baidu Index was able to provide more search volume data than Google Trends did. Our analysis showed that the disadvantage of Google Trends in this regard was due to Google's smaller user base in China. The implication of this finding goes beyond China. Google's user bases in many countries are smaller than that in China, so the search volume data related to those countries could result in the same issue as that related to China.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.1, S.13-22
    Theme
    Data Mining
  16. Fonseca, F.; Marcinkowski, M.; Davis, C.: Cyber-human systems of thought and understanding (2019) 0.04
    0.043052107 = product of:
      0.086104214 = sum of:
        0.07315418 = weight(_text_:data in 5011) [ClassicSimilarity], result of:
          0.07315418 = score(doc=5011,freq=24.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.60511017 = fieldWeight in 5011, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5011)
        0.012950035 = product of:
          0.02590007 = sum of:
            0.02590007 = weight(_text_:22 in 5011) [ClassicSimilarity], result of:
              0.02590007 = score(doc=5011,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.19345059 = fieldWeight in 5011, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5011)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The present challenge faced by scientists working with Big Data comes in the overwhelming volume and level of detail provided by current data sets. Exceeding traditional empirical approaches, Big Data opens a new perspective on scientific work in which data comes to play a role in the development of the scientific problematic to be developed. Addressing this reconfiguration of our relationship with data through readings of Wittgenstein, Macherey, and Popper, we propose a picture of science that encourages scientists to engage with the data in a direct way, using the data itself as an instrument for scientific investigation. Using GIS as a theme, we develop the concept of cyber-human systems of thought and understanding to bridge the divide between representative (theoretical) thinking and (non-theoretical) data-driven science. At the foundation of these systems, we invoke the concept of the "semantic pixel" to establish a logical and virtual space linking data and the work of scientists. It is with this discussion of the relationship between analysts in their pursuit of knowledge and the rise of Big Data that this present discussion of the philosophical foundations of Big Data addresses the central questions raised by social informatics research.
    Date
    7. 3.2019 16:32:22
    Theme
    Data Mining
  17. Jackson, P.: ¬A thesaurus for enhanced geographic access (1991) 0.04
    0.042815104 = product of:
      0.08563021 = sum of:
        0.05973014 = weight(_text_:data in 2298) [ClassicSimilarity], result of:
          0.05973014 = score(doc=2298,freq=4.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.49407038 = fieldWeight in 2298, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.078125 = fieldNorm(doc=2298)
        0.02590007 = product of:
          0.05180014 = sum of:
            0.05180014 = weight(_text_:22 in 2298) [ClassicSimilarity], result of:
              0.05180014 = score(doc=2298,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.38690117 = fieldWeight in 2298, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2298)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Argues that geographic access in on-line catalogues could be improved by organising and structuring geographic data through the hierarchies of a thesaurus. The proposed thesaurus, based on geographic area codes and the DDC area tables, would incorporate geographic coordinate data, and would allow catalogue users to expand or refine a search by defined areas
    Source
    LASIE. 22(1991) no.3, S.49-60
  18. Griffith, C.: What's all the hype about hypertext? (1989) 0.04
    0.042815104 = product of:
      0.08563021 = sum of:
        0.05973014 = weight(_text_:data in 2505) [ClassicSimilarity], result of:
          0.05973014 = score(doc=2505,freq=4.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.49407038 = fieldWeight in 2505, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.078125 = fieldNorm(doc=2505)
        0.02590007 = product of:
          0.05180014 = sum of:
            0.05180014 = weight(_text_:22 in 2505) [ClassicSimilarity], result of:
              0.05180014 = score(doc=2505,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.38690117 = fieldWeight in 2505, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2505)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Considers the reasons why CD-ROM's promise of a large range of legal databases has, to some extent, remained unfulfilled. The new range of CD-ROM hypertext databases, produced by West Publishing Company, is discussed briefly.
    Source
    Information today. 6(1989) no.4, S.22-24
  19. ¬The digital information revolution: [key presentations] : Superhighway symposium, FEI/EURIM Conference, November 16th & 17th 1994 [at the Central Hall, Westminster.] (1995) 0.04
    0.042815104 = product of:
      0.08563021 = sum of:
        0.05973014 = weight(_text_:data in 8) [ClassicSimilarity], result of:
          0.05973014 = score(doc=8,freq=4.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.49407038 = fieldWeight in 8, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.078125 = fieldNorm(doc=8)
        0.02590007 = product of:
          0.05180014 = sum of:
            0.05180014 = weight(_text_:22 in 8) [ClassicSimilarity], result of:
              0.05180014 = score(doc=8,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.38690117 = fieldWeight in 8, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=8)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    22.10.2006 18:22:51
    LCSH
    Electronic data interchange / Congresses
    Subject
    Electronic data interchange / Congresses
  20. Badia, A.: Data, information, knowledge : an information science analysis (2014) 0.04
    0.0421196 = product of:
      0.0842392 = sum of:
        0.06610915 = weight(_text_:data in 1296) [ClassicSimilarity], result of:
          0.06610915 = score(doc=1296,freq=10.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.5468357 = fieldWeight in 1296, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1296)
        0.01813005 = product of:
          0.0362601 = sum of:
            0.0362601 = weight(_text_:22 in 1296) [ClassicSimilarity], result of:
              0.0362601 = score(doc=1296,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.2708308 = fieldWeight in 1296, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1296)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    I analyze the text of an article, published in this journal in 2007, that reported the results of a questionnaire in which a number of experts were asked to define the concepts of data, information, and knowledge. I apply standard information retrieval techniques to build a list of the most frequent terms in each set of definitions. I then apply information extraction techniques to analyze how the top terms are used in the definitions. As a result, I draw data-driven conclusions about the aggregate opinion of the experts. I contrast this with the original analysis of the data to provide readers with an alternative viewpoint on what the data tell us.
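    A toy version of the frequency-listing step mentioned above; the sample definitions and the stop-word list are invented, not drawn from the 2007 questionnaire:

      from collections import Counter
      import re

      definitions = [
          "Data are raw symbols without context.",
          "Data are recorded facts, the raw material of information.",
      ]
      stop = {"are", "the", "of", "a", "without"}
      tokens = [t for d in definitions
                  for t in re.findall(r"[a-z]+", d.lower()) if t not in stop]
      print(Counter(tokens).most_common(3))   # e.g. [('data', 2), ('raw', 2), ...]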
    Date
    16. 6.2014 19:22:57

Types

  • a 5528
  • m 405
  • el 343
  • s 250
  • r 40
  • b 31
  • x 19
  • i 12
  • n 11
  • p 8
  • ? 4
  • l 3
  • d 2
  • h 1
