Search (8 results, page 1 of 1)

  • classification_ss:"020"
  1. Borgman, C.L.: Big data, little data, no data : scholarship in the networked world (2015) 0.02
    0.016015813 = product of:
      0.06406325 = sum of:
        0.06406325 = weight(_text_:communication in 2785) [ClassicSimilarity], result of:
          0.06406325 = score(doc=2785,freq=6.0), product of:
            0.19382635 = queryWeight, product of:
              4.317879 = idf(docFreq=1601, maxDocs=44218)
              0.04488925 = queryNorm
            0.33051878 = fieldWeight in 2785, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.317879 = idf(docFreq=1601, maxDocs=44218)
              0.03125 = fieldNorm(doc=2785)
      0.25 = coord(1/4)
    
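    The explain tree above follows Lucene's ClassicSimilarity (TF-IDF): the document score is coord × queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm. A minimal sketch, reproducing the top-ranked score from the constants shown (the idf formula 1 + ln(maxDocs/(docFreq+1)) is Lucene's classic definition):

    ```python
    import math

    # Constants copied from the explain output of result 1
    freq = 6.0              # occurrences of "communication" in the field
    doc_freq = 1601         # documents containing the term
    max_docs = 44218        # documents in the index
    query_norm = 0.04488925 # query normalization factor
    field_norm = 0.03125    # encodes field length (lossy, 1 byte in Lucene)
    coord = 1.0 / 4.0       # coord(1/4): 1 of 4 query clauses matched

    tf = math.sqrt(freq)                               # 2.4494898
    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 4.317879
    query_weight = idf * query_norm                    # 0.19382635
    field_weight = tf * idf * field_norm               # 0.33051878
    score = coord * query_weight * field_weight        # 0.016015813
    ```

    The same constants (idf, queryNorm) recur in every "communication" clause below; only freq and fieldNorm vary per document, which is why results 1 and 2 tie exactly.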
    Abstract
    "Big Data" is on the covers of Science, Nature, the Economist, and Wired magazines, on the front pages of the Wall Street Journal and the New York Times. But despite the media hyperbole, as Christine Borgman points out in this examination of data and scholarly research, having the right data is usually better than having more data; little data can be just as valuable as big data. In many cases, there are no data -- because relevant data don't exist, cannot be found, or are not available. Moreover, data sharing is difficult, incentives to do so are minimal, and data practices vary widely across disciplines. Borgman, an often-cited authority on scholarly communication, argues that data have no value or meaning in isolation; they exist within a knowledge infrastructure -- an ecology of people, practices, technologies, institutions, material objects, and relationships. After laying out the premises of her investigation -- six "provocations" meant to inspire discussion about the uses of data in scholarship -- Borgman offers case studies of data practices in the sciences, the social sciences, and the humanities, and then considers the implications of her findings for scholarly practice and research policy. To manage and exploit data over the long term, Borgman argues, requires massive investment in knowledge infrastructures; at stake is the future of scholarship.
    LCSH
    Communication in learning and scholarship / Technological innovations
    Subject
    Communication in learning and scholarship / Technological innovations
  2. Badia, A.: ¬The information manifold : why computers cannot solve algorithmic bias and fake news (2019) 0.02
    0.016015813 = product of:
      0.06406325 = sum of:
        0.06406325 = weight(_text_:communication in 160) [ClassicSimilarity], result of:
          0.06406325 = score(doc=160,freq=6.0), product of:
            0.19382635 = queryWeight, product of:
              4.317879 = idf(docFreq=1601, maxDocs=44218)
              0.04488925 = queryNorm
            0.33051878 = fieldWeight in 160, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.317879 = idf(docFreq=1601, maxDocs=44218)
              0.03125 = fieldNorm(doc=160)
      0.25 = coord(1/4)
    
    Content
    Introduction -- Information as codes : Shannon, Kolmogorov and the start of it all -- Information as content : semantics, possible worlds and all that jazz -- Information as pragmatics : impact and consequences -- Information as communication : networks and the phenomenon of emergence -- Will the real information please stand up? -- Is Shannon's theory a theory of information? -- Computers and information I : what can computers do? -- Computers and information II : machine learning, big data and algorithmic bias -- Humans and information -- Conclusions : where from here?
    LCSH
    Communication / Philosophy
    Subject
    Communication / Philosophy
  3. Tüür-Fröhlich, T.: ¬The non-trivial effects of trivial errors in scientific communication and evaluation (2016) 0.01
    0.013076856 = product of:
      0.052307423 = sum of:
        0.052307423 = weight(_text_:communication in 3137) [ClassicSimilarity], result of:
          0.052307423 = score(doc=3137,freq=4.0), product of:
            0.19382635 = queryWeight, product of:
              4.317879 = idf(docFreq=1601, maxDocs=44218)
              0.04488925 = queryNorm
            0.26986745 = fieldWeight in 3137, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.317879 = idf(docFreq=1601, maxDocs=44218)
              0.03125 = fieldNorm(doc=3137)
      0.25 = coord(1/4)
    
    Abstract
    "Thomson Reuters' citation indexes, i.e. SCI, SSCI and AHCI, are said to be "authoritative". Due to the huge influence of these databases on the global academic evaluation of productivity and impact, Terje Tüür-Fröhlich decided to conduct case studies on the data quality of Social Sciences Citation Index (SSCI) records. Tüür-Fröhlich investigated articles from social science and law. The main findings: SSCI records contain tremendous amounts of "trivial errors", not only the misspellings and typos previously mentioned in the bibliometrics and scientometrics literature. Beyond that, Tüür-Fröhlich's research documented fatal errors which had not been mentioned in the scientometrics literature at all. Tüür-Fröhlich found more than 80 fatal mutations and mutilations of Pierre Bourdieu (e.g. "Atkinson", "Pierre, B." and "Pierri, B."). SSCI even generated zombie references (phantom authors and works) through the confusion of data fields - a deadly sin for a database producer - as fragments of patent laws were indexed as fictional author surnames/initials. Additionally, horrific OCR errors (e.g. "nuxure" instead of "Nature" as journal title) were identified. Tüür-Fröhlich's extensive quantitative case study of an article from the Harvard Law Review produced a devastating finding: only 1% of all correct references from the original article were indexed by SSCI without any mistake or error. Many scientific communication experts and database providers believe that errors in databases are of little importance: there are many errors, yes - but they would counterbalance each other, would not result in citation losses, and would have no effect on retrieval and evaluation outcomes. Terje Tüür-Fröhlich claims the contrary: errors and inconsistencies are not evenly distributed but linked to language biases and publication cultures."
  4. Witschel, H.F.: Terminologie-Extraktion : Möglichkeiten der Kombination statistischer und musterbasierter Verfahren (2004) 0.01
    0.011558416 = product of:
      0.046233665 = sum of:
        0.046233665 = weight(_text_:communication in 123) [ClassicSimilarity], result of:
          0.046233665 = score(doc=123,freq=2.0), product of:
            0.19382635 = queryWeight, product of:
              4.317879 = idf(docFreq=1601, maxDocs=44218)
              0.04488925 = queryNorm
            0.23853138 = fieldWeight in 123, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.317879 = idf(docFreq=1601, maxDocs=44218)
              0.0390625 = fieldNorm(doc=123)
      0.25 = coord(1/4)
    
    Series
    Content and communication; Bd.1
  5. Vickery, B.C.; Vickery, A.: Information science in theory and practice (2004) 0.01
    0.005779208 = product of:
      0.023116833 = sum of:
        0.023116833 = weight(_text_:communication in 4320) [ClassicSimilarity], result of:
          0.023116833 = score(doc=4320,freq=2.0), product of:
            0.19382635 = queryWeight, product of:
              4.317879 = idf(docFreq=1601, maxDocs=44218)
              0.04488925 = queryNorm
            0.11926569 = fieldWeight in 4320, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.317879 = idf(docFreq=1601, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4320)
      0.25 = coord(1/4)
    
    Footnote
    Sociologically shaped. All of this in not even 350 pages: everything here can only be touched upon, which becomes painfully noticeable again and again, for example in the sections on information retrieval or on forms of knowledge representation. Classifications, thesauri, forms of abstracting, and so on are barely addressed at all. One must ask in general whether the weighting the authors apply is sensible. Their approach of describing information science as "the study of the communication of information in society" is a very broad one, and it is reflected in oversized sections that are strongly sociological in character without being truly illuminating; the statements, for example on the reach of communication or on different types of communication, are too general for that. More significant, since this comprehensive approach is not sustained at all but narrows ever further toward the communication of scholarly information, is that this book too ultimately leaves the impression that information science is a conglomerate of relatively unconnected theories and building blocks. This impression, which repeatedly imposes itself while reading, also because the historical development of the discipline is treated only very briefly (and in a generally US-centered way) and the demarcation from, and overlap with, other disciplines receives little attention (most strongly felt in chapter 3, "Wider contexts of information transfer"), is somewhat mitigated by the very commendable list of well-known information specialists in the appendix and by the visualization of information science, its branches, and its important representatives in the form of a kind of "map".
  6. Bertram, J.: Einführung in die inhaltliche Erschließung : Grundlagen - Methoden - Instrumente (2005) 0.00
    0.0046233665 = product of:
      0.018493466 = sum of:
        0.018493466 = weight(_text_:communication in 210) [ClassicSimilarity], result of:
          0.018493466 = score(doc=210,freq=2.0), product of:
            0.19382635 = queryWeight, product of:
              4.317879 = idf(docFreq=1601, maxDocs=44218)
              0.04488925 = queryNorm
            0.09541255 = fieldWeight in 210, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.317879 = idf(docFreq=1601, maxDocs=44218)
              0.015625 = fieldNorm(doc=210)
      0.25 = coord(1/4)
    
    Series
    Content and communication: Terminology, language resources and semantic interoperability; Bd.2
  7. Greifeneder, E.: Online-Hilfen in OPACs : Analyse deutscher Universitäts-Onlinekataloge (2007) 0.00
    0.003801171 = product of:
      0.015204684 = sum of:
        0.015204684 = product of:
          0.030409368 = sum of:
            0.030409368 = weight(_text_:22 in 1935) [ClassicSimilarity], result of:
              0.030409368 = score(doc=1935,freq=2.0), product of:
                0.1571945 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04488925 = queryNorm
                0.19345059 = fieldWeight in 1935, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1935)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 6.2008 13:03:30
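    Results 7 and 8 differ from the earlier explain trees in one respect: the term weight sits inside a nested boolean clause, so an inner coord(1/2) is multiplied with the outer coord(1/4). A minimal sketch, assuming the same ClassicSimilarity formula, reproducing the score of result 7 from its explain constants:

    ```python
    import math

    # Constants copied from the explain output of result 7
    freq = 2.0
    doc_freq = 3622
    max_docs = 44218
    query_norm = 0.04488925
    field_norm = 0.0390625
    inner_coord = 0.5    # coord(1/2): 1 of 2 nested clauses matched
    outer_coord = 0.25   # coord(1/4): 1 of 4 top-level clauses matched

    tf = math.sqrt(freq)                                   # 1.4142135
    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))      # 3.5018296
    weight = (idf * query_norm) * (tf * idf * field_norm)  # 0.030409368
    score = outer_coord * inner_coord * weight             # 0.003801171
    ```

    Result 8 follows the same nested pattern with a smaller fieldNorm (0.03125 instead of 0.0390625), which is why it scores lower.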
  8. Bedford, D.: Knowledge architectures : structures and semantics (2021) 0.00
    0.0030409365 = product of:
      0.012163746 = sum of:
        0.012163746 = product of:
          0.024327492 = sum of:
            0.024327492 = weight(_text_:22 in 566) [ClassicSimilarity], result of:
              0.024327492 = score(doc=566,freq=2.0), product of:
                0.1571945 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04488925 = queryNorm
                0.15476047 = fieldWeight in 566, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=566)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Content
    Section 1 Context and purpose of knowledge architecture -- 1 Making the case for knowledge architecture -- 2 The landscape of knowledge assets -- 3 Knowledge architecture and design -- 4 Knowledge architecture reference model -- 5 Knowledge architecture segments -- Section 2 Designing for availability -- 6 Knowledge object modeling -- 7 Knowledge structures for encoding, formatting, and packaging -- 8 Functional architecture for identification and distinction -- 9 Functional architectures for knowledge asset disposition and destruction -- 10 Functional architecture designs for knowledge preservation and conservation -- Section 3 Designing for accessibility -- 11 Functional architectures for knowledge seeking and discovery -- 12 Functional architecture for knowledge search -- 13 Functional architecture for knowledge categorization -- 14 Functional architectures for indexing and keywording -- 15 Functional architecture for knowledge semantics -- 16 Functional architecture for knowledge abstraction and surrogation -- Section 4 Functional architectures to support knowledge consumption -- 17 Functional architecture for knowledge augmentation, derivation, and synthesis -- 18 Functional architecture to manage risk and harm -- 19 Functional architectures for knowledge authentication and provenance -- 20 Functional architectures for securing knowledge assets -- 21 Functional architectures for authorization and asset management -- Section 5 Pulling it all together - the big picture knowledge architecture -- 22 Functional architecture for knowledge metadata and metainformation -- 23 The whole knowledge architecture - pulling it all together
