Search (664 results, page 2 of 34)

  • × type_ss:"el"
  • × year_i:[2010 TO 2020} (Lucene range syntax: "[" marks an inclusive, "}" an exclusive bound)
  1. Scheven, E.: Geokoordinaten in Bibliotheksdaten : Grundlage für innovative Nachnutzung (2015) 0.01
    Score (Lucene ClassicSimilarity explain, condensed; queryNorm = 0.046368346 throughout this list):
    0.012231203 = 0.4 (coord 2/5) × [0.005448922 weight(_text_:a in 308: freq=2.0, idf=1.153047, fieldNorm=0.0625) + 0.025129084 = 0.5 (coord 1/2) × 0.050258167 weight(_text_:22 in 308: freq=2.0, idf=3.5018296, fieldNorm=0.0625)]
    
    Date
    16.11.2015 18:22:47
    Type
    a
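    The condensed score lines in this list follow Lucene's ClassicSimilarity (TF-IDF) explain output. Below is a minimal sketch of the arithmetic, assuming only the constants shown in the explain data (queryNorm is copied from the output rather than derived, since it depends on the whole query); it reproduces entry 1's numbers:

        import math

        def term_score(freq, doc_freq, max_docs, field_norm, query_norm, coord=1.0):
            # One weight(...) clause: queryWeight * fieldWeight, times any inner coord.
            tf = math.sqrt(freq)                                   # 1.4142135 for freq=2.0
            idf = 1.0 + math.log((max_docs + 1) / (doc_freq + 1))  # 1.153047 for _text_:a
            query_weight = idf * query_norm                        # 0.053464882
            field_weight = tf * idf * field_norm                   # 0.10191591
            return coord * query_weight * field_weight

        QUERY_NORM = 0.046368346  # from the explain output

        w_a  = term_score(2.0, 37942, 44218, 0.0625, QUERY_NORM)            # 0.005448922
        w_22 = term_score(2.0, 3622, 44218, 0.0625, QUERY_NORM, coord=0.5)  # 0.025129084
        print(0.4 * (w_a + w_22))  # coord(2/5) -> 0.012231203

    The remaining entries differ only in freq, fieldNorm, the matched term, and the coord factors.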
  2. Hartmann, F.: Paul Otlets Hypermedium : Dokumentation als Gegenidee zur Bibliothek (2015) 0.01
    0.012231203 = 0.4 (coord 2/5) × [0.005448922 weight(_text_:a in 1432: freq=2.0, idf=1.153047, fieldNorm=0.0625) + 0.025129084 = 0.5 (coord 1/2) × 0.050258167 weight(_text_:22 in 1432: freq=2.0, idf=3.5018296, fieldNorm=0.0625)]
    
    Date
    22. 8.2016 15:58:46
    Type
    a
  3. Schleim, S.: Warum die Wissenschaft nicht frei ist (2017) 0.01
    0.012231203 = 0.4 (coord 2/5) × [0.005448922 weight(_text_:a in 3882: freq=2.0, idf=1.153047, fieldNorm=0.0625) + 0.025129084 = 0.5 (coord 1/2) × 0.050258167 weight(_text_:22 in 3882: freq=2.0, idf=3.5018296, fieldNorm=0.0625)]
    
    Date
    9.10.2017 15:48:22
    Type
    a
  4. Rötzer, F.: Chinesischer Roboter besteht weltweit erstmals Zulassungsprüfung für Mediziner (2017) 0.01
    0.012231203 = 0.4 (coord 2/5) × [0.005448922 weight(_text_:a in 3978: freq=2.0, idf=1.153047, fieldNorm=0.0625) + 0.025129084 = 0.5 (coord 1/2) × 0.050258167 weight(_text_:22 in 3978: freq=2.0, idf=3.5018296, fieldNorm=0.0625)]
    
    Issue
    [22. November 2017].
    Type
    a
  5. Rötzer, F.: Psychologen für die Künstliche Intelligenz (2018) 0.01
    0.012231203 = 0.4 (coord 2/5) × [0.005448922 weight(_text_:a in 4427: freq=2.0, idf=1.153047, fieldNorm=0.0625) + 0.025129084 = 0.5 (coord 1/2) × 0.050258167 weight(_text_:22 in 4427: freq=2.0, idf=3.5018296, fieldNorm=0.0625)]
    
    Date
    22. 1.2018 11:08:27
    Type
    a
  6. Schaat, S.: Von der automatisierten Manipulation zur Manipulation der Automatisierung (2019) 0.01
    0.012231203 = 0.4 (coord 2/5) × [0.005448922 weight(_text_:a in 4996: freq=2.0, idf=1.153047, fieldNorm=0.0625) + 0.025129084 = 0.5 (coord 1/2) × 0.050258167 weight(_text_:22 in 4996: freq=2.0, idf=3.5018296, fieldNorm=0.0625)]
    
    Date
    19. 2.2019 17:22:00
    Type
    a
  7. Mühlbauer, P.: Upload in Computer klappt. (2018) 0.01
    0.011492259 = 0.4 (coord 2/5) × [0.0067426977 weight(_text_:a in 4113: freq=4.0, idf=1.153047, fieldNorm=0.0546875) + 0.021987949 = 0.5 (coord 1/2) × 0.043975897 weight(_text_:22 in 4113: freq=2.0, idf=3.5018296, fieldNorm=0.0546875)]
    
    Content
    See also: http://www.heise.de/-3962785. See also: https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnx3d25pcDIwMTd8Z3g6NDQ3YjZhZTZiYWJiNDI5NA. See also: Henn, V.: Synthetisches Leben: auf dem Weg zum biologischen Betriebssystem [eBook]. Hannover: Heise Medien 2014. ISBN (epub) 978-3-944099-23-1.
    Date
    12. 2.2018 15:22:19
    Type
    a
  8. Zanibbi, R.; Yuan, B.: Keyword and image-based retrieval for mathematical expressions (2011) 0.01
    0.011193973 = 0.4 (coord 2/5) × [0.009138121 weight(_text_:a in 3449: freq=10.0, idf=1.153047, fieldNorm=0.046875) + 0.018846812 = 0.5 (coord 1/2) × 0.037693623 weight(_text_:22 in 3449: freq=2.0, idf=3.5018296, fieldNorm=0.046875)]
    
    Abstract
    Two new methods for retrieving mathematical expressions using conventional keyword search and expression images are presented. An expression-level TF-IDF (term frequency-inverse document frequency) approach is used for keyword search, where queries and indexed expressions are represented by keywords taken from LaTeX strings. TF-IDF is computed at the level of individual expressions rather than documents to increase the precision of matching. The second retrieval technique is a form of Content-Based Image Retrieval (CBIR). Expressions are segmented into connected components, and then components in the query expression and each expression in the collection are matched using contour and density features, aspect ratios, and relative positions. In an experiment using ten randomly sampled queries from a corpus of over 22,000 expressions, precision-at-k (k = 20) for the keyword-based approach was higher (keyword: µ = 84.0, s = 19.0; image-based: µ = 32.0, s = 30.7), but for a few of the queries better results were obtained using a combination of the two techniques.
    Date
    22. 2.2017 12:53:49
    Type
    a
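    A minimal sketch of the expression-level TF-IDF idea from the abstract above: each LaTeX expression, rather than its containing document, is the indexing unit. The toy expressions and the crude tokenizer are illustrative assumptions, not the authors' implementation:

        import math
        from collections import Counter

        # Hypothetical corpus: every LaTeX expression is its own "document".
        expressions = [
            r"\frac{a}{b} + c^{2}",
            r"\int_{0}^{1} x^{2} dx",
            r"a^{2} + b^{2} = c^{2}",
        ]

        def tokens(latex):
            # Crude keyword extraction: split LaTeX on braces and superscript markers.
            return latex.replace("{", " ").replace("}", " ").replace("^", " ").split()

        docs = [Counter(tokens(e)) for e in expressions]
        n_docs = len(docs)
        df = Counter(t for d in docs for t in set(d))     # expression frequency per token
        idf = {t: math.log(n_docs / df[t]) + 1 for t in df}

        def tfidf_vector(counter):
            vec = {t: (1 + math.log(c)) * idf.get(t, 0.0) for t, c in counter.items()}
            norm = math.sqrt(sum(v * v for v in vec.values())) or 1.0
            return {t: v / norm for t, v in vec.items()}

        index = [tfidf_vector(d) for d in docs]

        def search(query, k=20):
            # Rank expressions by cosine similarity; precision-at-k is judged on this list.
            q = tfidf_vector(Counter(tokens(query)))
            scored = [(sum(q.get(t, 0.0) * w for t, w in d.items()), i)
                      for i, d in enumerate(index)]
            return sorted(scored, reverse=True)[:k]

        print(search(r"x^{2}"))  # the integral expression should rank first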
  9. Raban, D.R.; Rusho, Y.: Value perception of information sources in the context of learning (2018) 0.01
    0.010939863 = 0.4 (coord 2/5) × [0.0076151006 weight(_text_:a in 5059: freq=10.0, idf=1.153047, fieldNorm=0.0390625) + 0.019734556 = 0.5 (coord 1/2) × 0.03946911 weight(_text_:information in 5059: freq=50.0, idf=1.7554779, fieldNorm=0.0390625)]
    
    Abstract
    Information sources require consumers to use them in order to evaluate their quality, meaning that they are experience goods. The value perceived before acquisition and use may differ from the value obtained by actual use, and understanding this value-perception gap is likely to inform more efficient selection of information sources. The current research studies the value gap in a learning situation. We examine information value perceptions before and after experiencing information in an experiment with 113 software engineers engaged in a problem-based learning task while using and evaluating three types of information sources: supportive, reflective and reciprocal. The results indicate that before using an information source, the subjective value of supportive information is lower than that of reflective information. In addition, 55% of the participants preferred to obtain information when presented with a choice. After using an information source, no correlation was observed between the perceived value of information before and after use (the value gap); participants assigned a higher user experience (UX) value to reflective and reciprocal information than to supportive information; UX value correlated positively with revealed information value, as did learning achievement; reciprocal information was associated with higher learning achievement than reflective and supportive information; and use of information led to higher learning achievement than avoidance of information. Reciprocal information supports high achievement in informal software-engineering learning, and reflective information is valued more highly than supportive information sources. If supportive information is essential, designers of learning environments should invest heavily in interface design combining reciprocal and reflective elements, such as forums and "try it yourself" features, respectively.
    Source
    Open information science. 2(2018) no.1, S.83-101
    Type
    a
  10. Stumpf, G.: "Kerngeschäft" Sacherschließung in neuer Sicht : was gezielte intellektuelle Arbeit und maschinelle Verfahren gemeinsam bewirken können (2015) 0.01
    0.010702303 = 0.4 (coord 2/5) × [0.004767807 weight(_text_:a in 1703: freq=2.0, idf=1.153047, fieldNorm=0.0546875) + 0.021987949 = 0.5 (coord 1/2) × 0.043975897 weight(_text_:22 in 1703: freq=2.0, idf=3.5018296, fieldNorm=0.0546875)]
    
    Content
    This is the slightly revised text of a talk given at the VDB continuing-education event "Wandel als Konstante: neue Aufgaben und Herausforderungen für sozialwissenschaftliche Bibliotheken", 22-23 January 2015, Berlin.
    Type
    a
  11. Hermsdorf, D.: Zweifel an Franz Hörmanns "Informationsgeld" : https://www.heise.de/tp/features/Zweifel-an-Franz-Hoermanns-Informationsgeld-3730411.html. (2017) 0.01
    0.010702303 = 0.4 (coord 2/5) × [0.004767807 weight(_text_:a in 3675: freq=2.0, idf=1.153047, fieldNorm=0.0546875) + 0.021987949 = 0.5 (coord 1/2) × 0.043975897 weight(_text_:22 in 3675: freq=2.0, idf=3.5018296, fieldNorm=0.0546875)]
    
    Date
    20. 6.2017 11:55:22
    Type
    a
  12. Pany, T.: Konfusion in der Medienrepublik : Der Überraschungseffekt der Youtuber (2019) 0.01
    0.010702303 = 0.4 (coord 2/5) × [0.004767807 weight(_text_:a in 5244: freq=2.0, idf=1.153047, fieldNorm=0.0546875) + 0.021987949 = 0.5 (coord 1/2) × 0.043975897 weight(_text_:22 in 5244: freq=2.0, idf=3.5018296, fieldNorm=0.0546875)]
    
    Content
    See also: Dörner, S.: "CDU-Zerstörer" Rezo: Es kamen "Diskreditierung, Lügen, Trump-Wordings und keine inhaltliche Auseinandersetzung" [22 May 2019]. Interview with Rezo. At: https://www.heise.de/tp/features/CDU-Zerstoerer-Rezo-Es-kamen-Diskreditierung-Luegen-Trump-Wordings-und-keine-inhaltliche-4428522.html?view=print [http://www.heise.de/-4428522].
    Type
    a
  13. Open Knowledge Foundation: Prinzipien zu offenen bibliographischen Daten (2011) 0.01
    0.010463237 = 0.2 (coord 1/5) × [0.007893822 weight(_text_:information in 4399: freq=2.0, idf=1.7554779, fieldNorm=0.0390625) + 0.044422362 weight(_text_:22 in 4399: freq=4.0, idf=3.5018296, fieldNorm=0.0390625)]
    
    Content
    "Bibliographische Daten Um den Geltungsbereich der Prinzipien festzulegen, wird in diesem ersten Teil der zugrundeliegende Begriff bibliographischer Daten erläutert. Kerndaten Bibliographische Daten bestehen aus bibliographischen Beschreibungen. Eine bibliographische Beschreibung beschreibt eine bibliographische Ressource (Artikel, Monographie etc. - ob gedruckt oder elektronisch) zum Zwecke 1. der Identifikation der beschriebenen Ressource, d.h. des Zeigens auf eine bestimmte Ressource in der Gesamtheit aller bibliographischer Ressourcen und 2. der Lokalisierung der beschriebenen Ressource, d.h. eines Hinweises, wo die beschriebene Ressource aufzufinden ist. Traditionellerweise erfüllte eine Beschreibung beide Zwecke gleichzeitig, indem sie Information lieferte über: Autor(en) und Herausgeber, Titel, Verlag, Veröffentlichungsdatum und -ort, Identifizierung des übergeordneten Werks (z.B. einer Zeitschrift), Seitenangaben. Im Web findet Identifikation statt mittels Uniform Resource Identifiers (URIs) wie z.B. URNs oder DOIs. Lokalisierung wird ermöglicht durch HTTP-URIs, die auch als Uniform Resource Locators (URLs) bezeichnet werden. Alle URIs für bibliographische Ressourcen fallen folglich unter den engen Begriff bibliographischer Daten. Sekundäre Daten Eine bibliographische Beschreibung kann andere Informationen enthalten, die unter den Begriff bibliographischer Daten fallen, beispielsweise Nicht-Web-Identifikatoren (ISBN, LCCN, OCLC etc.), Angaben zum Urheberrechtsstatus, administrative Daten und mehr; diese Daten können von Bibliotheken, Verlagen, Wissenschaftlern, Online-Communities für Buchliebhaber, sozialen Literaturverwaltungssystemen und Anderen produziert sein. Darüber hinaus produzieren Bibliotheken und verwandte Institutionen kontrollierte Vokabulare zum Zwecke der bibliographischen Beschreibung wie z. B. Personen- und Schlagwortnormdateien, Klassifikationen etc., die ebenfalls unter den Begriff bibliographischer Daten fallen."
    Date
    22. 3.2011 18:22:29
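    The identification/localization distinction in the principles above is visible in practice when a DOI is dereferenced: the identifier stays stable while the https://doi.org/ resolver supplies the current HTTP location. A small sketch using the requests library (10.1000/182, the DOI Handbook's own DOI, serves as the example; any valid DOI works):

        import requests

        def resolve_doi(doi):
            # Follow doi.org's redirect chain to the resource's current HTTP URL.
            resp = requests.head(f"https://doi.org/{doi}", allow_redirects=True, timeout=10)
            return resp.url

        # The URL printed below may change over time; the DOI itself will not.
        print(resolve_doi("10.1000/182"))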
  14. Roy, W.; Gray, C.: Preparing existing metadata for repository batch import : a recipe for a fickle food (2018) 0.01
    0.010368963 = 0.4 (coord 2/5) × [0.01021673 weight(_text_:a in 4550: freq=18.0, idf=1.153047, fieldNorm=0.0390625) + 0.015705677 = 0.5 (coord 1/2) × 0.031411353 weight(_text_:22 in 4550: freq=2.0, idf=3.5018296, fieldNorm=0.0390625)]
    
    Abstract
    In 2016, the University of Waterloo began offering a mediated copyright review and deposit service to support the growth of our institutional repository UWSpace. This resulted in the need to batch import large lists of published works into the institutional repository quickly and accurately. A range of methods have been proposed for harvesting publication metadata en masse, but many technological solutions can easily become detached from a workflow that is both reproducible for support staff and applicable to a range of situations. Many repositories offer the capacity for batch upload via CSV, so our method provides a template Python script that leverages the Habanero library to populate CSV files with existing metadata retrieved from the CrossRef API. In our case, we have combined this with useful metadata contained in a TSV file downloaded from Web of Science to further enrich our metadata. The appeal of this 'low-maintenance' method is that it provides more robust options for gathering metadata semi-automatically and only requires the user's ability to access Web of Science and run the Python program, while still remaining flexible enough for local customizations.
    Date
    10.11.2018 16:27:22
    Type
    a
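    A minimal sketch of the kind of workflow the abstract describes, using the Habanero CrossRef client to populate a CSV for batch import. The DOI list, column names, and output file are illustrative assumptions, not the authors' template script:

        import csv
        from habanero import Crossref  # pip install habanero

        cr = Crossref()
        dois = ["10.1000/182"]  # placeholder list; in practice, the mediated-deposit queue

        rows = []
        for doi in dois:
            work = cr.works(ids=doi)["message"]  # CrossRef metadata record for one DOI
            rows.append({
                "title": (work.get("title") or [""])[0],
                "authors": "; ".join(f"{a.get('family', '')}, {a.get('given', '')}"
                                     for a in work.get("author", [])),
                "issued": "-".join(str(p) for p in
                                   work.get("issued", {}).get("date-parts", [[""]])[0]),
                "doi": doi,
            })

        # CSV shaped for repository batch upload (column names are illustrative).
        with open("batch_import.csv", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["title", "authors", "issued", "doi"])
            writer.writeheader()
            writer.writerows(rows)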
  15. Gödert, W.; Lepsky, K.: Reception of externalized knowledge : a constructivistic model based on Popper's Three Worlds and Searle's Collective Intentionality (2019) 0.01
    0.009925711 = 0.4 (coord 2/5) × [0.012184162 weight(_text_:a in 5205: freq=10.0, idf=1.153047, fieldNorm=0.0625) + 0.012630116 = 0.5 (coord 1/2) × 0.025260232 weight(_text_:information in 5205: freq=8.0, idf=1.7554779, fieldNorm=0.0625)]
    
    Abstract
    We provide a model for the reception of knowledge from externalized information sources. The model is based on a cognitive understanding of information processing and draws on ideas of an exchange of information in communication processes. Karl Popper's three-world theory, with its orientation toward falsifiable scientific knowledge, is extended by John Searle's concept of collective intentionality. This allows a consistent description of the externalization and reception of knowledge, covering scientific as well as everyday knowledge.
    Theme
    Information
    Type
    a
  16. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.01
    0.009850507 = 0.4 (coord 2/5) × [0.005779455 weight(_text_:a in 4649: freq=4.0, idf=1.153047, fieldNorm=0.046875) + 0.018846812 = 0.5 (coord 1/2) × 0.037693623 weight(_text_:22 in 4649: freq=2.0, idf=3.5018296, fieldNorm=0.046875)]
    
    Abstract
    More and more cultural heritage institutions publish their collections, vocabularies and metadata on the Web. The resulting Web of linked cultural data opens up exciting new possibilities for searching and browsing through these cultural heritage collections. We report on ongoing work in which we investigate the estimation of relevance in this Web of Culture. We study existing measures of semantic distance and how they apply to two use cases. The use cases relate to the structured, multilingual and multimodal nature of the Culture Web. We distinguish between measures using the Web, such as Google distance and PMI, and measures using the Linked Data Web, i.e. the semantic structure of metadata vocabularies. We perform a small study in which we compare these semantic distance measures to human judgements of relevance. Although it is too early to draw any definitive conclusions, the study provides new insights into the applicability of semantic distance measures to the Web of Culture, and clear starting points for further research.
    Date
    26.12.2011 13:40:22
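    Two of the Web-based measures named in the abstract, PMI and the normalized Google distance (Cilibrasi & Vitányi), reduce to simple formulas over page counts. A sketch with hypothetical hit counts (the numbers are made up for illustration):

        import math

        def pmi(fx, fy, fxy, n):
            # Pointwise mutual information from page counts out of n indexed pages.
            return math.log((fxy * n) / (fx * fy))

        def ngd(fx, fy, fxy, n):
            # Normalized Google distance; 0 means the terms always co-occur.
            lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
            return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

        n = 1_000_000  # hypothetical size of the index
        print(pmi(5_000, 20_000, 1_200, n))  # ~2.48 > 0: co-occur more than chance
        print(ngd(5_000, 20_000, 1_200, n))  # ~0.53: moderately related; smaller = closer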
  17. Voß, J.: Classification of knowledge organization systems with Wikidata (2016) 0.01
    0.009850507 = 0.4 (coord 2/5) × [0.005779455 weight(_text_:a in 3082: freq=4.0, idf=1.153047, fieldNorm=0.046875) + 0.018846812 = 0.5 (coord 1/2) × 0.037693623 weight(_text_:22 in 3082: freq=2.0, idf=3.5018296, fieldNorm=0.046875)]
    
    Abstract
    This paper presents a crowd-sourced classification of knowledge organization systems based on the open knowledge base Wikidata. The focus is less on the current, rather preliminary result than on the environment and process of categorization in Wikidata and the extraction of KOS from the collaborative database. Benefits and disadvantages are summarized and discussed for application to the knowledge organization of other subject areas with Wikidata.
    Pages
    S.15-22
    Type
    a
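    A sketch of how such a crowd-sourced classification can be read back out of Wikidata with a SPARQL query over "instance of"/"subclass of" (P31/P279). The endpoint and the two properties are real; the Q-identifier for "knowledge organization system" is an assumption to verify on wikidata.org:

        import requests

        # NOTE: Q6423319 is assumed to be "knowledge organization system" -- verify it.
        QUERY = """
        SELECT ?kos ?kosLabel WHERE {
          ?kos wdt:P31/wdt:P279* wd:Q6423319 .
          SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
        }
        LIMIT 50
        """

        resp = requests.get(
            "https://query.wikidata.org/sparql",
            params={"query": QUERY, "format": "json"},
            headers={"User-Agent": "kos-classification-sketch/0.1"},  # WDQS asks for one
            timeout=30,
        )
        for row in resp.json()["results"]["bindings"]:
            print(row["kos"]["value"], row["kosLabel"]["value"])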
  18. Kluge, A.; Singer, W.: ¬Das Gehirn braucht so viel Strom wie die Glühbirne (2012) 0.01
    0.009850507 = 0.4 (coord 2/5) × [0.005779455 weight(_text_:a in 4167: freq=4.0, idf=1.153047, fieldNorm=0.046875) + 0.018846812 = 0.5 (coord 1/2) × 0.037693623 weight(_text_:22 in 4167: freq=2.0, idf=3.5018296, fieldNorm=0.046875)]
    
    Date
    22. 2.2018 18:10:21
    Type
    a
  19. Dowding, H.; Gengenbach, M.; Graham, B.; Meister, S.; Moran, J.; Peltzman, S.; Seifert, J.; Waugh, D.: OSS4EVA: using open-source tools to fulfill digital preservation requirements (2016) 0.01
    0.009619041 = 0.4 (coord 2/5) × [0.008341924 weight(_text_:a in 3200: freq=12.0, idf=1.153047, fieldNorm=0.0390625) + 0.015705677 = 0.5 (coord 1/2) × 0.031411353 weight(_text_:22 in 3200: freq=2.0, idf=3.5018296, fieldNorm=0.0390625)]
    
    Abstract
    This paper builds on the findings of a workshop held at the 2015 International Conference on Digital Preservation (iPRES), entitled "Using Open-Source Tools to Fulfill Digital Preservation Requirements" (OSS4PRES hereafter). This day-long workshop brought together participants from across the library and archives community, including practitioners, proprietary vendors, and representatives from open-source projects. The resulting conversations were surprisingly revealing: while OSS's significance within the preservation landscape was made clear, participants noted that there are a number of roadblocks that discourage or altogether prevent its use in many organizations. Overcoming these challenges will be necessary to further widespread, sustainable OSS adoption within the digital preservation community. This article mines the rich discussions that took place at OSS4PRES to (1) summarize the workshop's key themes and major points of debate, (2) provide a comprehensive analysis of the opportunities, gaps, and challenges that using OSS entails at a philosophical, institutional, and individual level, and (3) offer a tangible set of recommendations for future work designed to broaden community engagement and enhance the sustainability of open-source initiatives, drawing on participants' experience as well as additional research.
    Date
    28.10.2016 18:22:33
    Type
    a
  20. Bawden, D.; Robinson, L.: Information and the gaining of understanding (2015) 0.01
    0.009554823 = 0.4 (coord 2/5) × [0.008258085 weight(_text_:a in 893: freq=6.0, idf=1.153047, fieldNorm=0.0546875) + 0.015628971 = 0.5 (coord 1/2) × 0.031257942 weight(_text_:information in 893: freq=16.0, idf=1.7554779, fieldNorm=0.0546875)]
    
    Abstract
    It is suggested that, in addition to data, information and knowledge, the information sciences should focus on understanding, understood as a higher-order form of knowledge with coherent and explanatory potential. The limited ways in which understanding has been addressed in the design of information systems, in studies of information behaviour, in formulations of information literacy and in impact studies are briefly reviewed, and future prospects considered. The paper is an extended version of a keynote presentation given at the i3 conference in June 2015.
    Source
    Journal of information science. 41(2015) no.x, S.1-6
    Theme
    Information
    Type
    a

Languages

  • d 351
  • e 293
  • i 6
  • f 2
  • a 1
  • el 1
  • es 1
  • no 1

Types

  • a 504
  • s 14
  • r 10
  • x 9
  • m 7
  • n 6
  • i 2
  • b 1

Themes