Search (27051 results, page 2 of 1353)

  1. Northoff, G.: The spontaneous brain : from the mind-body to the world-brain problem (2018) 0.17
    0.16524868 = product of:
      0.20656085 = sum of:
        0.016294096 = product of:
          0.081470475 = sum of:
            0.081470475 = weight(_text_:problem in 5432) [ClassicSimilarity], result of:
              0.081470475 = score(doc=5432,freq=12.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.45947897 = fieldWeight in 5432, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5432)
          0.2 = coord(1/5)
        0.0795246 = weight(_text_:philosophy in 5432) [ClassicSimilarity], result of:
          0.0795246 = score(doc=5432,freq=4.0), product of:
            0.23055021 = queryWeight, product of:
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.04177434 = queryNorm
            0.34493396 = fieldWeight in 5432, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.03125 = fieldNorm(doc=5432)
        0.009029076 = weight(_text_:of in 5432) [ClassicSimilarity], result of:
          0.009029076 = score(doc=5432,freq=8.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.13821793 = fieldWeight in 5432, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=5432)
        0.10171307 = product of:
          0.20342614 = sum of:
            0.20342614 = weight(_text_:mind in 5432) [ClassicSimilarity], result of:
              0.20342614 = score(doc=5432,freq=16.0), product of:
                0.2607373 = queryWeight, product of:
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.04177434 = queryNorm
                0.7801958 = fieldWeight in 5432, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5432)
          0.5 = coord(1/2)
      0.8 = coord(4/5)
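Every explain tree in these results follows Lucene's ClassicSimilarity (TF-IDF) formula: tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), fieldWeight = tf * idf * fieldNorm, queryWeight = idf * queryNorm, and a per-term score of queryWeight * fieldWeight. A minimal sketch reproducing the weight(_text_:problem in 5432) leaf above, with the constants taken directly from the tree:

```python
import math

def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """Recompute one leaf of a Lucene ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                           # 3.4641016 for freq=12
    idf = 1 + math.log(max_docs / (doc_freq + 1))  # 4.244485 for docFreq=1723
    query_weight = idf * query_norm                # 0.17731056
    field_weight = tf * idf * field_norm           # 0.45947897 = fieldWeight
    return query_weight * field_weight

score = classic_term_score(freq=12.0, doc_freq=1723, max_docs=44218,
                           query_norm=0.04177434, field_norm=0.03125)
print(score)  # close to the 0.081470475 leaf in the explain tree
```

The coord(n/m) lines then scale each summed level by the fraction of its clauses that matched, e.g. 0.8 = coord(4/5) when four of five query clauses hit the document.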
    
    Abstract
    Philosophers have long debated the mind-body problem: whether to attribute such mental features as consciousness to mind or to body. Meanwhile, neuroscientists search for empirical answers, seeking neural correlates for consciousness, self, and free will. In this book, Georg Northoff does not propose new solutions to the mind-body problem; instead, he questions the problem itself, arguing that it is an empirically, ontologically, and conceptually implausible way to address the existence and reality of mental features. We are better off, he contends, addressing consciousness and other mental features in terms of the relationship between world and brain; philosophers should consider the world-brain problem rather than the mind-body problem. This calls for a Copernican shift in vantage point from within the mind or brain to beyond the brain in our consideration of mental features. Northoff, a neuroscientist, psychiatrist, and philosopher, explains that empirical evidence suggests that the brain's spontaneous activity and its spatiotemporal structure are central to aligning and integrating the brain within the world. This spatiotemporal structure allows the brain to extend beyond itself into body and world, creating the world-brain relation that is central to mental features. Northoff makes his argument in empirical, ontological, and epistemic-methodological terms. He discusses current models of the brain, applies these models to recent data on neuronal features underlying consciousness, and proposes the world-brain relation as the ontological predisposition for consciousness.
    LCSH
    Mind and body
    Neurosciences / Philosophy
    Subject
    Mind and body
    Neurosciences / Philosophy
  2. Malsburg, C. von der: The correlation theory of brain function (1981) 0.16
    0.16360113 = product of:
      0.27266854 = sum of:
        0.08986723 = product of:
          0.22466806 = sum of:
            0.16587181 = weight(_text_:3a in 76) [ClassicSimilarity], result of:
              0.16587181 = score(doc=76,freq=2.0), product of:
                0.35416332 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04177434 = queryNorm
                0.46834838 = fieldWeight in 76, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=76)
            0.058796246 = weight(_text_:problem in 76) [ClassicSimilarity], result of:
              0.058796246 = score(doc=76,freq=4.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.33160037 = fieldWeight in 76, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=76)
          0.4 = coord(2/5)
        0.16587181 = weight(_text_:2f in 76) [ClassicSimilarity], result of:
          0.16587181 = score(doc=76,freq=2.0), product of:
            0.35416332 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04177434 = queryNorm
            0.46834838 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=76)
        0.016929517 = weight(_text_:of in 76) [ClassicSimilarity], result of:
          0.016929517 = score(doc=76,freq=18.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.25915858 = fieldWeight in 76, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=76)
      0.6 = coord(3/5)
    
    Abstract
    A summary of brain theory is given so far as it is contained within the framework of Localization Theory. Difficulties of this "conventional theory" are traced back to a specific deficiency: there is no way to express relations between active cells (as for instance their representing parts of the same object). A new theory is proposed to cure this deficiency. It introduces a new kind of dynamical control, termed synaptic modulation, according to which synapses switch between a conducting and a non-conducting state. The dynamics of this variable is controlled on a fast time scale by correlations in the temporal fine structure of cellular signals. Furthermore, conventional synaptic plasticity is replaced by a refined version. Synaptic modulation and plasticity form the basis for short-term and long-term memory, respectively. Signal correlations, shaped by the variable network, express structure and relationships within objects. In particular, the figure-ground problem may be solved in this way. Synaptic modulation introduces flexibility into cerebral networks which is necessary to solve the invariance problem. Since momentarily useless connections are deactivated, interference between different memory traces can be reduced, and memory capacity increased, in comparison with conventional associative memory.
    Content
    Originally published July 1981 as Internal Report 81-2, Dept. of Neurobiology, Max-Planck-Institute for Biophysical Chemistry, 3400 Göttingen, West Germany.
    Source
    http://cogprints.org/1380/1/vdM_correlation.pdf
  3. Schrodt, R.: Tiefen und Untiefen im wissenschaftlichen Sprachgebrauch (2008) 0.16
    0.1592644 = product of:
      0.398161 = sum of:
        0.13276611 = product of:
          0.33191526 = sum of:
            0.2653949 = weight(_text_:3a in 140) [ClassicSimilarity], result of:
              0.2653949 = score(doc=140,freq=2.0), product of:
                0.35416332 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04177434 = queryNorm
                0.7493574 = fieldWeight in 140, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=140)
            0.066520356 = weight(_text_:problem in 140) [ClassicSimilarity], result of:
              0.066520356 = score(doc=140,freq=2.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.375163 = fieldWeight in 140, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0625 = fieldNorm(doc=140)
          0.4 = coord(2/5)
        0.2653949 = weight(_text_:2f in 140) [ClassicSimilarity], result of:
          0.2653949 = score(doc=140,freq=2.0), product of:
            0.35416332 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04177434 = queryNorm
            0.7493574 = fieldWeight in 140, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=140)
      0.4 = coord(2/5)
    
    Abstract
    "Anyone who speaks or writes at all should express themselves intelligibly. At first glance this is a self-evident demand, for why would one speak if one did not want to be understood?" (Luhmann 2005, 193) Our problem seems that simple - and yet it is not.
    Content
    See also: https://studylibde.com/doc/13053640/richard-schrodt. See also: http://www.univie.ac.at/Germanistik/schrodt/vorlesung/wissenschaftssprache.doc.
  4. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.16
    0.15923695 = product of:
      0.39809236 = sum of:
        0.066348724 = product of:
          0.33174363 = sum of:
            0.33174363 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.33174363 = score(doc=1826,freq=2.0), product of:
                0.35416332 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04177434 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.2 = coord(1/5)
        0.33174363 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.33174363 = score(doc=1826,freq=2.0), product of:
            0.35416332 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04177434 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.4 = coord(2/5)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  5. Mainzer, K.: The emergence of self-conscious systems : from symbolic AI to embodied robotics (2014) 0.16
    0.15806854 = product of:
      0.19758567 = sum of:
        0.008315044 = product of:
          0.041575223 = sum of:
            0.041575223 = weight(_text_:problem in 3398) [ClassicSimilarity], result of:
              0.041575223 = score(doc=3398,freq=2.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.23447686 = fieldWeight in 3398, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3398)
          0.2 = coord(1/5)
        0.12174669 = weight(_text_:philosophy in 3398) [ClassicSimilarity], result of:
          0.12174669 = score(doc=3398,freq=6.0), product of:
            0.23055021 = queryWeight, product of:
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.04177434 = queryNorm
            0.52807015 = fieldWeight in 3398, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3398)
        0.022572692 = weight(_text_:of in 3398) [ClassicSimilarity], result of:
          0.022572692 = score(doc=3398,freq=32.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.34554482 = fieldWeight in 3398, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3398)
        0.04495125 = product of:
          0.0899025 = sum of:
            0.0899025 = weight(_text_:mind in 3398) [ClassicSimilarity], result of:
              0.0899025 = score(doc=3398,freq=2.0), product of:
                0.2607373 = queryWeight, product of:
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.04177434 = queryNorm
                0.34480107 = fieldWeight in 3398, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3398)
          0.5 = coord(1/2)
      0.8 = coord(4/5)
    
    Abstract
    Knowledge representation, which is today used in database applications, artificial intelligence (AI), software engineering, and many other disciplines of computer science, has deep roots in logic and philosophy. In the beginning, there was Aristotle (384 BC-322 BC), who developed logic as a precise method for reasoning about knowledge. Syllogisms were introduced as formal patterns for representing special figures of logical deductions. According to Aristotle, the subject of ontology is the study of categories of things that exist or may exist in some domain. In modern times, Descartes considered the human brain as a store of knowledge representation. Recognition was made possible by an isomorphic correspondence between internal geometrical representations (ideae) and external situations and events. Leibniz was deeply influenced by these traditions. In his mathesis universalis, he required a universal formal language (lingua universalis) to represent human thinking by calculation procedures and to implement them by means of mechanical calculating machines. An ars iudicandi should allow every problem to be decided by an algorithm after representation in numeric symbols. An ars inveniendi should enable users to seek and enumerate desired data and solutions to problems. In the age of mechanics, knowledge representation was reduced to mechanical calculation procedures. In the twentieth century, computational cognitivism arose in the wake of Turing's theory of computability. In its functionalism, the hardware of a computer is related to the wetware of the human brain. The mind is understood as the software of a computer.
    Series
    History and philosophy of technoscience; 3
    Source
    Philosophy, computing and information science. Eds.: R. Hagengruber u. U.V. Riss
  6. Huth, M.: Symbolic and sub-symbolic knowledge organization in the Computational Theory of Mind (1995) 0.16
    0.15612905 = product of:
      0.26021507 = sum of:
        0.008315044 = product of:
          0.041575223 = sum of:
            0.041575223 = weight(_text_:problem in 1086) [ClassicSimilarity], result of:
              0.041575223 = score(doc=1086,freq=2.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.23447686 = fieldWeight in 1086, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1086)
          0.2 = coord(1/5)
        0.022572692 = weight(_text_:of in 1086) [ClassicSimilarity], result of:
          0.022572692 = score(doc=1086,freq=32.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.34554482 = fieldWeight in 1086, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1086)
        0.22932734 = sum of:
          0.20102811 = weight(_text_:mind in 1086) [ClassicSimilarity], result of:
            0.20102811 = score(doc=1086,freq=10.0), product of:
              0.2607373 = queryWeight, product of:
                6.241566 = idf(docFreq=233, maxDocs=44218)
                0.04177434 = queryNorm
              0.77099866 = fieldWeight in 1086, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                6.241566 = idf(docFreq=233, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1086)
          0.028299233 = weight(_text_:22 in 1086) [ClassicSimilarity], result of:
            0.028299233 = score(doc=1086,freq=2.0), product of:
              0.14628662 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04177434 = queryNorm
              0.19345059 = fieldWeight in 1086, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1086)
      0.6 = coord(3/5)
    
    Abstract
    We sketch the historic transformation of culturally grown techniques of symbol manipulation, such as basic arithmetic in the decimal number system, to the full-fledged version of the Computational Theory of Mind. Symbol manipulation systems had been considered by Leibniz as a methodology of inferring knowledge in a secure and purely mechanical fashion. Such 'inference calculi' were considered as mere artefacts which could not possibly encompass all human knowledge acquisition. In Alan Turing's work one notices a crucial shift of perspective. The abstract mathematical states of a Turing machine (a kind of 'calculus universalis' that Leibniz was looking for) are claimed to correspond to equivalent psychological states. Artefacts are turned into faithful models of human cognition. A further step toward the Computational Theory of Mind was the physical symbol system hypothesis, contending to have found a necessary and sufficient criterion for the presence of 'intelligence' in operative mediums. This, together with Chomsky's foundational work on linguistics, led naturally to the Computational Theory of Mind as set out by Jerry Fodor and Zenon Pylyshyn. We discuss problematic aspects of this theory. Then we deal with another paradigm of the Computational Theory of Mind based on network automata. This sub-symbolic paradigm seems to avoid problems occurring in symbolic computations, like the 'frame problem' and 'graceful degradation'.
    Source
    Knowledge organization. 22(1995) no.1, S.10-17
  7. Mas, S.; Marleau, Y.: Proposition of a faceted classification model to support corporate information organization and digital records management (2009) 0.15
    0.15326574 = product of:
      0.2554429 = sum of:
        0.03980924 = product of:
          0.19904618 = sum of:
            0.19904618 = weight(_text_:3a in 2918) [ClassicSimilarity], result of:
              0.19904618 = score(doc=2918,freq=2.0), product of:
                0.35416332 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04177434 = queryNorm
                0.56201804 = fieldWeight in 2918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2918)
          0.2 = coord(1/5)
        0.19904618 = weight(_text_:2f in 2918) [ClassicSimilarity], result of:
          0.19904618 = score(doc=2918,freq=2.0), product of:
            0.35416332 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04177434 = queryNorm
            0.56201804 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
        0.016587472 = weight(_text_:of in 2918) [ClassicSimilarity], result of:
          0.016587472 = score(doc=2918,freq=12.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.25392252 = fieldWeight in 2918, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
      0.6 = coord(3/5)
    
    Abstract
    The employees of an organization often use a personal hierarchical classification scheme to organize digital documents that are stored on their own workstations. As this may make it hard for other employees to retrieve these documents, there is a risk that the organization will lose track of needed documentation. Furthermore, the inherent boundaries of such a hierarchical structure require making arbitrary decisions about which specific criteria the classification will be based on (for instance, the administrative activity or the document type, although a document can have several attributes and require classification in several classes). A faceted classification model to support corporate information organization is proposed. Partially based on Ranganathan's facets theory, this model aims not only to standardize the organization of digital documents, but also to simplify the management of a document throughout its life cycle for both individuals and organizations, while ensuring compliance to regulatory and policy requirements.
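As an illustration of the contrast the abstract draws, a hypothetical record (all field names and values invented here) can carry several independent facet values instead of sitting in a single hierarchical folder, so retrieval works along any combination of axes:

```python
# Hypothetical sketch: one digital record described by independent facets
# (activity, document type, lifecycle stage) rather than one folder path.
record = {
    "title": "Supplier contract 2009-17",
    "facets": {
        "activity": "procurement",   # administrative activity
        "doc_type": "contract",      # document type
        "lifecycle": "active",       # records-management stage
    },
}

def matches(rec, **criteria):
    """Retrieve by any combination of facet values."""
    return all(rec["facets"].get(k) == v for k, v in criteria.items())

print(matches(record, doc_type="contract"))                         # True
print(matches(record, activity="procurement", lifecycle="active"))  # True
```

A single hierarchy would force one of these axes to be the top-level criterion; the faceted form leaves them all queryable.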
    Footnote
    See: http://ieeexplore.ieee.org/iel5/4755313/4755314/04755480.pdf?arnumber=4755480.
  8. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.15
    0.15326574 = product of:
      0.2554429 = sum of:
        0.03980924 = product of:
          0.19904618 = sum of:
            0.19904618 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.19904618 = score(doc=400,freq=2.0), product of:
                0.35416332 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04177434 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.2 = coord(1/5)
        0.19904618 = weight(_text_:2f in 400) [ClassicSimilarity], result of:
          0.19904618 = score(doc=400,freq=2.0), product of:
            0.35416332 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04177434 = queryNorm
            0.56201804 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
        0.016587472 = weight(_text_:of in 400) [ClassicSimilarity], result of:
          0.016587472 = score(doc=400,freq=12.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.25392252 = fieldWeight in 400, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
      0.6 = coord(3/5)
    
    Abstract
    On a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values being a group of child concepts. We call these attributes facets: classification has a few facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods heavily rely on hypernym detection; however, the faceted relations are parent-to-child links, whereas the hypernym relation is a multi-hop, i.e., ancestor-to-descendant link with a specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations from a data science corpus. We then propose a hierarchy growth algorithm to infer the parent-child links from the three types of relationships. It resolves conflicts by maintaining the acyclic structure of a hierarchy.
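The conflict-resolution idea in the abstract (accept only links that keep the hierarchy acyclic) can be sketched in a toy form; the candidate edges and concept names below are invented for illustration, not taken from the paper:

```python
def acyclic_after_adding(edges, parent, child):
    """True if adding parent->child keeps the edge set free of cycles.

    A cycle would arise exactly when `child` already reaches `parent`.
    """
    adj = {}
    for p, c in edges:
        adj.setdefault(p, []).append(c)
    stack, seen = [child], set()
    while stack:
        node = stack.pop()
        if node == parent:
            return False
        if node not in seen:
            seen.add(node)
            stack.extend(adj.get(node, []))
    return True

def grow_hierarchy(candidates):
    """Greedily accept candidate parent->child links, skipping cycle-closers."""
    accepted = []
    for parent, child in candidates:
        if acyclic_after_adding(accepted, parent, child):
            accepted.append((parent, child))
    return accepted

# Invented example: the last candidate would close a cycle and is skipped.
cands = [("classification", "svm"), ("svm", "kernel-svm"),
         ("kernel-svm", "classification")]
print(grow_hierarchy(cands))
```

The paper's actual algorithm also weighs synonym and sibling evidence; this sketch shows only the acyclicity constraint.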
    Content
    See: https://aclanthology.org/D19-5317.pdf.
    Source
    Graph-Based Methods for Natural Language Processing - proceedings of the Thirteenth Workshop (TextGraphs-13): November 4, 2019, Hong Kong : EMNLP-IJCNLP 2019. Ed.: Dmitry Ustalov
  9. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.15
    0.1523986 = product of:
      0.25399765 = sum of:
        0.03980924 = product of:
          0.19904618 = sum of:
            0.19904618 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.19904618 = score(doc=862,freq=2.0), product of:
                0.35416332 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04177434 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.2 = coord(1/5)
        0.19904618 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.19904618 = score(doc=862,freq=2.0), product of:
            0.35416332 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04177434 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
        0.015142222 = weight(_text_:of in 862) [ClassicSimilarity], result of:
          0.015142222 = score(doc=862,freq=10.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.23179851 = fieldWeight in 862, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.6 = coord(3/5)
    
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges (summary and question answering) prompt ChatGPT to produce original content (98-99%) from a single text entry and sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and simple grammatical set for understanding the writing mechanics of chatbots in evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
    Source
    https://arxiv.org/abs/2212.06721
  10. Robinson, L.; Bawden, D.: Mind the gap : transitions between concepts of information in varied domains (2014) 0.15
    0.1500192 = product of:
      0.25003198 = sum of:
        0.14058095 = weight(_text_:philosophy in 1315) [ClassicSimilarity], result of:
          0.14058095 = score(doc=1315,freq=2.0), product of:
            0.23055021 = queryWeight, product of:
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.04177434 = queryNorm
            0.60976285 = fieldWeight in 1315, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.078125 = fieldNorm(doc=1315)
        0.019548526 = weight(_text_:of in 1315) [ClassicSimilarity], result of:
          0.019548526 = score(doc=1315,freq=6.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.2992506 = fieldWeight in 1315, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=1315)
        0.0899025 = product of:
          0.179805 = sum of:
            0.179805 = weight(_text_:mind in 1315) [ClassicSimilarity], result of:
              0.179805 = score(doc=1315,freq=2.0), product of:
                0.2607373 = queryWeight, product of:
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.04177434 = queryNorm
                0.68960214 = fieldWeight in 1315, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1315)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
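    The relevance breakdown above is Lucene's classic TF-IDF "explain" output (it labels itself [ClassicSimilarity]). A minimal sketch, assuming ClassicSimilarity's documented formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))), reproduces the "philosophy" clause of entry 10:

```python
import math

# Lucene ClassicSimilarity building blocks, matching the explain tree above.
def idf(doc_freq, max_docs):
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(freq):
    return math.sqrt(freq)

# Constants taken verbatim from the "philosophy" clause of entry 10:
query_norm = 0.04177434
field_norm = 0.078125                  # field-length normalization

idf_phil = idf(481, 44218)             # ~5.5189433
query_weight = idf_phil * query_norm   # ~0.23055021
field_weight = tf(2.0) * idf_phil * field_norm   # ~0.60976285
clause_score = query_weight * field_weight       # ~0.14058095
```

    The per-clause scores are then summed and multiplied by the coord(3/5) factor (three of five query clauses matched) to give the document score shown next to the entry heading.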
    
    Series
    Studies in history and philosophy of science ; 34
    Source
    Theories of information, communication and knowledge : a multidisciplinary approach. Eds.: F. Ibekwe-SanJuan u. T.M. Dousa
  11. Robinson, G.: Time out of mind : a critical consideration of Table 1g (2000) 0.14
    Source
    Extensions and corrections to the UDC. 22(2000), S.28-31
  12. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.14
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
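    The thesis's three association measures are not given in this abstract; as a hedged stand-in illustrating the general idea (scoring how much more often two words co-occur than chance predicts), here is a minimal pointwise-mutual-information sketch over adjacent word pairs. It is not the thesis's measures or the LocalMaxs algorithm they feed:

```python
import math
from collections import Counter

# PMI(x, y) = log2( P(x, y) / (P(x) * P(y)) ): positive when the pair
# co-occurs more often than its words' independent frequencies predict.
def pmi(pair, unigrams, bigrams, n_words, n_bigrams):
    x, y = pair
    p_xy = bigrams[pair] / n_bigrams
    return math.log2(p_xy / ((unigrams[x] / n_words) * (unigrams[y] / n_words)))

tokens = ("information retrieval and information extraction "
          "support information retrieval research").split()
unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))

# "information retrieval" is a strongly associated pair on this toy text,
# so its PMI comes out positive (~1.75 bits).
score = pmi(("information", "retrieval"), unigrams, bigrams,
            len(tokens), len(tokens) - 1)
```

    Candidate multi-word terms are typically those adjacent sequences whose association score stands out against their neighbours, which is the intuition LocalMaxs-style extraction builds on.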
    Content
    A thesis presented to the University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. See: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
    Imprint
    Guelph, Ontario : University of Guelph
  13. Thornley, C.; Gibb, F.: Meaning in philosophy and meaning in information retrieval (IR) (2009) 0.13
    Abstract
    Purpose - The purpose of this paper is to explore the question of whether the differences between meaning in philosophy and meaning in information retrieval (IR) have implications for the use of philosophy in supporting research in IR. Design/methodology/approach - The approach takes the form of a conceptual analysis and literature review. Findings - There are some differences in the role of meaning in terms of purpose, content and use which should be clarified in order to assist a productive relationship between the philosophy of language and IR. Research limitations/implications - This provides some new theoretical insights into the philosophical context of IR. It suggests that further productive work on the central concepts within IR could be achieved through the use of a methodology which analyses how exactly these concepts are discussed in other disciplines and the implications of any differences in the way in which they may operate in IR. Originality/value - The paper suggests a new perspective on the relationship between philosophy and IR by exploring the role of meaning in these respective disciplines and highlighting differences, as well as similarities, with particular reference to the role of information as well as meaning in IR. This contributes to an understanding of two of the central concepts in IR, meaning and information, and the ways in which they are related. There is a history of work in IR and information science (IS) examining dilemmas and the paper builds on this work by relating it to some similar dilemmas in philosophy. Thus it develops the theory and conceptual understanding of IR by suggesting that philosophy could be used as a way of exploring intractable dilemmas in IR.
    Date
    23. 2.2009 17:22:29
    Source
    Journal of documentation. 65(2009) no.1, S.133-150
  14. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.13
    Abstract
    While classifications are heavily used to categorize web content, the evolution of the web foresees a more formal structure - ontology - which can serve this purpose. Ontologies are core artifacts of the Semantic Web which enable machines to use inference rules to conduct automated reasoning on data. Lightweight ontologies bridge the gap between classifications and ontologies. A lightweight ontology (LO) is an ontology representing a backbone taxonomy where the concept of the child node is more specific than the concept of the parent node. Formal lightweight ontologies can be generated from their informal ones. The key applications of formal lightweight ontologies are document classification, semantic search, and data integration. However, these applications suffer from the following problems: the disambiguation accuracy of the state-of-the-art NLP tools used in generating formal lightweight ontologies from their informal ones; the lack of background knowledge needed for the formal lightweight ontologies; and the limitation of ontology reuse. In this dissertation, we propose a novel solution to these problems in formal lightweight ontologies; namely, faceted lightweight ontology (FLO). FLO is a lightweight ontology in which terms, present in each node label, and their concepts, are available in the background knowledge (BK), which is organized as a set of facets. A facet can be defined as a distinctive property of the groups of concepts that can help in differentiating one group from another. Background knowledge can be defined as a subset of a knowledge base, such as WordNet, and often represents a specific domain.
    Content
    PhD dissertation at the International Doctorate School in Information and Communication Technology. See: https://core.ac.uk/download/pdf/150083013.pdf.
    Imprint
    Trento : University / Department of information engineering and computer science
  15. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.13
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactic nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning, rather than its representation), which makes the results of a retrieval process of very low usefulness for a user's task at hand. In the last ten years ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the very ambiguous nature of the retrieval process, in which a user, unfamiliar with the underlying repository and/or query syntax, merely approximates his information need in a query, implies the necessity to include the user more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with a user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In that way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively.
    Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure, strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the ability to conceptualize a user's information need in the right manner, and to interpret the retrieval results accordingly, is a key issue in realizing much more meaningful information retrieval systems.
    Content
    See: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  16. Blair, D.: Wittgenstein, language and information : "Back to the Rough Ground!" (2006) 0.13
    Abstract
    This book is an extension of the discussions presented in Blair's 1990 book "Language and Representation in Information Retrieval", which was selected as the "Best Information Science Book of the Year" by the American Society for Information Science (ASIS). That work stated that the Philosophy of Language had the best theory for understanding meaning in language, and within the Philosophy of Language, the work of philosopher Ludwig Wittgenstein was found to be most perceptive. The success of that book provided an incentive to look more deeply into Wittgenstein's philosophy of language, and how it can help us to understand how to represent the intellectual content of information. This is what the current title does, and by using this theory it creates a firm foundation for future Information Retrieval research. The work consists of four related parts. Firstly, a brief overview of Wittgenstein's philosophy of language and its relevance to information systems. Secondly, a detailed explanation of Wittgenstein's late philosophy of language and mind. Thirdly, an extended discussion of the relevance of his philosophy to understanding some of the problems inherent in information systems, especially those systems which rely on retrieval based on some representation of the intellectual content of that information. And, fourthly, a series of detailed footnotes which cite the sources of the numerous quotations and provide some discussion of the related issues that the text inspires.
    Footnote
    Review in: Journal of documentation 63(2007) no.2, S.xxx-xxx (B. Hjoerland)
    LCSH
    Language and languages / Philosophy
    Subject
    Language and languages / Philosophy
  17. Suchenwirth, L.: Sacherschliessung in Zeiten von Corona : neue Herausforderungen und Chancen (2019) 0.13
    Footnote
    https://journals.univie.ac.at/index.php/voebm/article/download/5332/5271/
  18. Piros, A.: Az ETO-jelzetek automatikus interpretálásának és elemzésének kérdései (2018) 0.13
    Abstract
    Converting UDC numbers manually to a complex format such as the one mentioned above is an unrealistic expectation; supporting the building of these representations, as far as possible automatically, is a well-founded requirement. An additional advantage of this approach is that existing records could also be processed and converted. In my dissertation I would like to prove that it is possible to design and implement an algorithm that is able to convert pre-coordinated UDC numbers into the introduced format, identifying all their elements and revealing their whole syntactic structure. I will discuss a feasible way of building a UDC-specific XML schema for describing the most detailed and complicated UDC numbers (containing not only the common auxiliary signs and numbers, but also the different types of special auxiliaries). The schema definition is available online at: http://piros.udc-interpreter.hu#xsd. The primary goal of my research is to prove that it is possible to support building, retrieving, and analyzing UDC numbers without compromises, taking in the whole syntactic richness of the scheme and storing UDC numbers while preserving the meaning of their pre-coordination. The research has also included the implementation of software that parses UDC classmarks, intended to prove that such a solution can be applied automatically, without any additional effort, and even retrospectively on existing collections.
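    As a hedged illustration of what "revealing the whole syntactic structure" of a pre-coordinated UDC number involves (a toy sketch over a small subset of UDC syntax, not Piros's algorithm or XML schema), a classmark can be split into its main numbers, auxiliaries, and connecting signs:

```python
import re

# Token patterns for a (simplified!) subset of UDC syntax: main class
# numbers, the parenthesized place and quoted time common auxiliaries,
# hyphen-introduced special auxiliaries, and the :, +, / connectors.
TOKEN = re.compile(r"""
    (?P<number>\d+(?:\.\d+)*)        # main class number, e.g. 821.111
  | (?P<place>\(\d+(?:\.\d+)*\))     # common auxiliary of place, e.g. (410)
  | (?P<time>"\d+")                  # common auxiliary of time, e.g. "19"
  | (?P<special>-\d+(?:\.\d+)*)      # special auxiliary, e.g. -31
  | (?P<connector>[:+/])             # relation, addition, range
""", re.VERBOSE)

def tokenize_udc(classmark):
    tokens, pos = [], 0
    while pos < len(classmark):
        m = TOKEN.match(classmark, pos)
        if not m:
            raise ValueError(f"unrecognized UDC syntax at {pos}: {classmark[pos:]!r}")
        tokens.append((m.lastgroup, m.group()))
        pos = m.end()
    return tokens
```

    For example, `tokenize_udc('821.111-31"19"')` labels the main number, the special auxiliary, and the time auxiliary separately; a real interpreter would go further and build the full syntax tree with the auxiliaries attached to the numbers they modify.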
    Content
    See also: New automatic interpreter for complex UDC numbers. At: <https://udcc.org/files/AttilaPiros_EC_36-37_2014-2015.pdf>
  19. Prokop, M.: Hans Jonas and the phenomenological continuity of life and mind (2022) 0.13
    Abstract
    This paper offers a novel interpretation of Hans Jonas' analysis of metabolism, the centrepiece of Jonas' philosophy of organism, in relation to recent controversies regarding the phenomenological dimension of life-mind continuity as understood within 'autopoietic' enactivism (AE). Jonas' philosophy of organism chiefly inspired AE's development of what we might call 'the phenomenological life-mind continuity thesis' (PLMCT), the claim that certain phenomenological features of human experience are central to a proper scientific understanding of both life and mind, and as such central features of all living organisms. After discussing the understanding of PLMCT within AE, and recent criticisms thereof, I develop a reading of Jonas' analysis of metabolism, in light of previous commentators, which emphasizes its systematicity and transcendental flavour. The central thought is that, for Jonas, the attribution of certain phenomenological features is a necessary precondition for our understanding of the possibility of metabolism, rather than being derivable from metabolism itself. I argue that my interpretation strengthens Jonas' contribution to AE's justification for ascribing certain phenomenological features to life across the board. However, it also emphasizes the need to complement Jonas' analysis with an explanatory account of organic identity in order to vindicate these phenomenological ascriptions in a scientific context.
  20. Marradi, A.: ¬The concept of concept : concepts and terms (2012) 0.13
    Abstract
    The concept of concept has seldom been examined in its entirety, and the term very seldom defined. The rigidity, or lack thereof, and the homogeneity, or lack thereof, of concepts are only two of their characteristics that have been debated. This paper reviews these issues, namely: 1) does a concept represent its referent(s), or is it a free creation of the mind?; 2) can a concept be analyzed into parts or elements?; 3) must a concept be general, i.e., refer to a category or a type, or can it refer to a single object, physical or mental?; 4) are concepts as clearly delimited as terms are? Are concepts voiceless terms?; and 5) what do terms contribute to an individual's and a community's conceptual richness? Regarding the relationship of concepts with their referents at the stage of formation, it seems reasonable to conclude that this relationship may be close in some concepts, less close in others, and lacking altogether in some cases. The set of elements of a concept, which varies from individual to individual and across time within the same individual, is called the intension of a concept. The set of referents of a concept is called the extension of that concept. Most concepts do not have a clearly delimited extension: their referents form a fuzzy set. The aspects of a concept's intension form a scale of generality. A concept is not equal to the term that describes it; rather, many terms are joined to concepts. Language, therefore, renders a gamut of services to the development, consolidation, and communication of conceptual richness.
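    The abstract's distinction between a concept's intension (its set of characteristic aspects) and its fuzzy extension (its set of referents, with graded rather than all-or-nothing membership) can be illustrated with a minimal sketch. All concept names, aspects, and membership values below are hypothetical illustrations, not drawn from Marradi's paper; membership is crudely approximated as the fraction of intensional aspects a candidate referent exhibits.

```python
# Minimal sketch: a concept modeled as an intension (a set of
# characteristic aspects) plus a fuzzy extension (graded membership
# of candidate referents). All names and values are assumptions
# chosen for illustration.

def fuzzy_extension(intension, candidates):
    """Degree of membership = fraction of the concept's intensional
    aspects that a candidate referent exhibits."""
    return {
        name: len(features & intension) / len(intension)
        for name, features in candidates.items()
    }

# Hypothetical concept "chair" with four intensional aspects.
chair = {"has_seat", "has_legs", "for_sitting", "has_back"}

candidates = {
    "kitchen chair": {"has_seat", "has_legs", "for_sitting", "has_back"},
    "stool":         {"has_seat", "has_legs", "for_sitting"},
    "beanbag":       {"has_seat", "for_sitting"},
    "table":         {"has_legs"},
}

membership = fuzzy_extension(chair, candidates)
# A kitchen chair is a full member of the extension (1.0); a stool
# (0.75) and a beanbag (0.5) are partial members; a table (0.25)
# barely belongs at all -- the extension has no sharp boundary.
```

    The point of the sketch is only that "referent of a concept" comes in degrees, matching the abstract's claim that most extensions are fuzzy sets rather than sharply delimited classes.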
    Date
    22. 1.2012 13:11:25
    Series
    Forum: The philosophy of classification
