Search (5 results, page 1 of 1)

  • classification_ss:"020"
  • year_i:[2010 TO 2020}
  1. Badia, A.: The information manifold : why computers cannot solve algorithmic bias and fake news (2019) 0.02
    0.015815495 = product of:
      0.079077475 = sum of:
        0.079077475 = weight(_text_:semantic in 160) [ClassicSimilarity], result of:
          0.079077475 = score(doc=160,freq=10.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.41088465 = fieldWeight in 160, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.03125 = fieldNorm(doc=160)
      0.2 = coord(1/5)
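The tree above is Lucene's ClassicSimilarity (TF-IDF) explain output. A minimal sketch reproducing the score for result 1 from the quoted factors; `query_norm` and `field_norm` are taken as given, since they depend on the full query and on index-time field length:

```python
import math

# Constants quoted in the explain tree for weight(_text_:semantic in 160)
freq = 10.0               # termFreq of "semantic" in this document's text field
doc_freq = 1879           # number of documents containing "semantic"
max_docs = 44218          # total documents in the index
query_norm = 0.04628742   # query normalization (depends on the whole query)
field_norm = 0.03125      # index-time length norm for this field
coord = 1 / 5             # 1 of 5 query clauses matched

# ClassicSimilarity components
tf = math.sqrt(freq)                           # 3.1622777
idf = 1 + math.log(max_docs / (doc_freq + 1))  # 4.1578603
query_weight = idf * query_norm                # 0.19245663
field_weight = tf * idf * field_norm           # 0.41088465

# Final score, matching the 0.015815495 shown for result 1
score = coord * query_weight * field_weight
```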
    
    Abstract
    An argument that information exists at different levels of analysis (syntactic, semantic, and pragmatic) and an exploration of the implications. Although this is the Information Age, there is no universal agreement about what information really is. Different disciplines view information differently; engineers, computer scientists, economists, linguists, and philosophers all take varying and apparently disconnected approaches. In this book, Antonio Badia distinguishes four levels of analysis brought to bear on information: syntactic, semantic, pragmatic, and network-based. Badia explains each of these theoretical approaches in turn, discussing, among other topics, the theories of Claude Shannon and Andrey Kolmogorov, Fred Dretske's description of information flow, and ideas on receiver impact and informational interactions. Badia argues that all these theories describe the same phenomena from different perspectives, each one narrower than the previous one. The syntactic approach is the most general, but it fails to specify when information is meaningful to an agent, which is the focus of the semantic and pragmatic approaches. The network-based approach, meanwhile, provides a framework for understanding information use among agents. Badia then explores the consequences of understanding information as existing at several levels. Humans live at the semantic and pragmatic levels (and, as a society, at the network level); computers live at the syntactic level. This sheds light on some recent issues, including "fake news" (computers cannot tell whether a statement is true, because truth is a semantic notion) and "algorithmic bias" (a pragmatic, not syntactic, concern). Humans, not computers, the book argues, have the ability to solve these issues.
  2. Arafat, S.; Ashoori, E.: Search foundations : toward a science of technology-mediated experience (2018) 0.01
    0.009169802 = product of:
      0.04584901 = sum of:
        0.04584901 = weight(_text_:retrieval in 158) [ClassicSimilarity], result of:
          0.04584901 = score(doc=158,freq=12.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.32745665 = fieldWeight in 158, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=158)
      0.2 = coord(1/5)
    
    Abstract
    This book contributes to discussions within information retrieval and science (IR&S) by improving our conceptual understanding of the relationship between humans and technology. Sachi Arafat and Elham Ashoori issue a call to reorient the intellectual focus of IR&S away from search and related processes toward the more general phenomenon of technology-mediated experience. Technology-mediated experience accounts for an increasing proportion of human lived experience, and the phenomenon of mediation gets at the heart of the human-machine relationship. Framing IR&S more broadly in this way generalizes its problems and perspectives, dovetailing them with those shared across disciplines dealing with socio-technical phenomena. This reorientation requires imagining IR&S as a new kind of science: a science of technology-mediated experience (STME). Arafat and Ashoori not only offer a detailed analysis of the foundational concepts underlying IR&S and other technical disciplines but also boldly call for a radical, systematic appropriation of the sciences and humanities to create a better understanding of the human-technology relationship. They discuss the notion of progress in IR&S, drawing on ideas of progress from the history and philosophy of science, and argue that progress in IR&S requires explicit linking between technical and nontechnical aspects of discourse. They develop a network of basic questions and present a discursive framework for addressing them. With this book, Arafat and Ashoori provide both a manifesto for the reimagining of their field and the foundations on which a reframed IR&S would rest.
    Content
    The embedding of the foundational in the adhoc -- Notions of progress in information retrieval -- From growth to progress I : methodology for understanding progress -- From growth to progress II : the network of discourse -- Basic questions characterising foundations discourse -- Enduring nature of foundations -- Foundations as the way to the authoritative against the authoritarian : a conclusion
    LCSH
    Information retrieval
    Subject
    Information retrieval
  3. Borgman, C.L.: Big data, little data, no data : scholarship in the networked world (2015) 0.01
    0.005294188 = product of:
      0.026470939 = sum of:
        0.026470939 = weight(_text_:retrieval in 2785) [ClassicSimilarity], result of:
          0.026470939 = score(doc=2785,freq=4.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.18905719 = fieldWeight in 2785, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=2785)
      0.2 = coord(1/5)
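Results 2 and 3 each match the query on the same single term (`retrieval`), with identical idf, queryNorm, and fieldNorm, so their weights differ only through raw term frequency. A small sanity check, assuming ClassicSimilarity's square-root tf as shown in both trees:

```python
import math

# Weights quoted in the two explain trees above
weight_freq12 = 0.04584901   # result 2: freq = 12, tf = sqrt(12)
weight_freq4 = 0.026470939   # result 3: freq = 4,  tf = sqrt(4)

# With every other factor equal, the ratio collapses to sqrt(12/4) = sqrt(3)
expected_ratio = math.sqrt(12 / 4)
actual_ratio = weight_freq12 / weight_freq4
```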
    
    LCSH
    Information storage and retrieval systems
    Subject
    Information storage and retrieval systems
  4. Tüür-Fröhlich, T.: The non-trivial effects of trivial errors in scientific communication and evaluation (2016) 0.00
    0.003743556 = product of:
      0.01871778 = sum of:
        0.01871778 = weight(_text_:retrieval in 3137) [ClassicSimilarity], result of:
          0.01871778 = score(doc=3137,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.13368362 = fieldWeight in 3137, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=3137)
      0.2 = coord(1/5)
    
    Abstract
    Thomson Reuters' citation indexes (SCI, SSCI and AHCI) are said to be "authoritative". Given the huge influence of these databases on the global academic evaluation of productivity and impact, Terje Tüür-Fröhlich conducted case studies on the data quality of Social Sciences Citation Index (SSCI) records, investigating articles from social science and law. The main findings: SSCI records contain tremendous numbers of "trivial errors", not only the misspellings and typos previously mentioned in the bibliometrics and scientometrics literature. Tüür-Fröhlich's research also documented fatal errors that had not yet been mentioned in the scientometrics literature at all: more than 80 fatal mutations and mutilations of Pierre Bourdieu (e.g. "Atkinson", "Pierre, B." and "Pierri, B."). SSCI even generated zombie references (phantom authors and works) through data-field confusion, a deadly sin for a database producer, as fragments of patent laws were indexed as fictional author surnames/initials. Additionally, horrific OCR errors (e.g. "nuxure" instead of "Nature" as a journal title) were identified. Tüür-Fröhlich's extensive quantitative case study of an article in the Harvard Law Review produced a devastating finding: only 1% of all correct references from the original article were indexed by SSCI without any mistake or error. Many scientific communication experts and database providers believe that errors in databases are of little importance: there are many errors, yes, but they would counterbalance each other, would not result in citation losses, and would not affect retrieval and evaluation outcomes. Tüür-Fröhlich claims the contrary: errors and inconsistencies are not evenly distributed but are linked with language biases and publication cultures.
  5. Franz, G.: Die vielen Wikipedias : Vielsprachigkeit als Zugang zu einer globalisierten Online-Welt [The many Wikipedias: multilingualism as a gateway to a globalized online world] (2011) 0.00
    0.002178703 = product of:
      0.010893514 = sum of:
        0.010893514 = product of:
          0.021787029 = sum of:
            0.021787029 = weight(_text_:web in 568) [ClassicSimilarity], result of:
              0.021787029 = score(doc=568,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.14422815 = fieldWeight in 568, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=568)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
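Result 5's tree has one extra layer: the `web` clause sits inside a nested boolean group, so its weight is scaled by an inner coord(1/2) before the outer coord(1/5). A sketch chaining the quoted factors, with the constants copied directly from the tree:

```python
# Constants quoted in the explain tree for weight(_text_:web in 568)
query_weight = 0.15105948   # idf * queryNorm
field_weight = 0.14422815   # sqrt(2) * idf * fieldNorm
inner_coord = 1 / 2         # 1 of 2 clauses matched in the nested group
outer_coord = 1 / 5         # 1 of 5 top-level clauses matched

term_weight = query_weight * field_weight          # 0.021787029
score = outer_coord * (inner_coord * term_weight)  # 0.0021787029
```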
    
    Series
    Reihe Web 2.0
