Search (701 results, page 1 of 36)

  • Active filter: year_i:[2010 TO 2020}
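The active filter above uses Lucene/Solr range syntax, where "[" marks an inclusive and "}" an exclusive bound. A minimal sketch of its semantics (the function name is illustrative, not part of any API):

```python
# Lucene/Solr range filter semantics for year_i:[2010 TO 2020}:
# '[' makes the lower bound inclusive, '}' makes the upper bound exclusive,
# so the filter matches 2010 <= year < 2020.
def matches_year_filter(year: int) -> bool:
    return 2010 <= year < 2020
```

This is consistent with the result list: every entry is dated 2010-2019.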
  1. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.25
    0.24842279 = product of:
      0.49684557 = sum of:
        0.12421139 = product of:
          0.37263417 = sum of:
            0.37263417 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
              0.37263417 = score(doc=973,freq=2.0), product of:
                0.33151442 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.039102852 = queryNorm
                1.1240361 = fieldWeight in 973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.09375 = fieldNorm(doc=973)
          0.33333334 = coord(1/3)
        0.37263417 = weight(_text_:2f in 973) [ClassicSimilarity], result of:
          0.37263417 = score(doc=973,freq=2.0), product of:
            0.33151442 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.039102852 = queryNorm
            1.1240361 = fieldWeight in 973, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.09375 = fieldNorm(doc=973)
      0.5 = coord(2/4)
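The score breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output. A minimal sketch that reproduces its arithmetic for this first entry, using only the values shown in the tree (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))):

```python
import math

# Values taken from the explain output for the term "3a" in doc 973.
freq, doc_freq, max_docs = 2.0, 24, 44218
field_norm, query_norm = 0.09375, 0.039102852

tf = math.sqrt(freq)                               # 1.4142135 = tf(freq=2.0)
idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 8.478011  = idf(docFreq=24, maxDocs=44218)
query_weight = idf * query_norm                    # 0.33151442 = queryWeight
field_weight = tf * idf * field_norm               # 1.1240361  = fieldWeight
term_score = query_weight * field_weight           # 0.37263417 = weight(_text_:3a)

# The document score combines the two term weights with coord factors:
inner = term_score * (1 / 3)                       # 0.12421139 = coord(1/3) branch
total = (inner + term_score) * (2 / 4)             # 0.24842279 = final score
```

After rounding, `total` matches the 0.25 displayed next to the entry.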
    
    Content
    Cf.: http://creativechoice.org/doc/HansJonas.pdf.
  2. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.21
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  3. Suchenwirth, L.: Sacherschliessung in Zeiten von Corona : neue Herausforderungen und Chancen (2019) 0.16
    
    Footnote
    https://journals.univie.ac.at/index.php/voebm/article/download/5332/5271/
  4. Gödert, W.; Lepsky, K.: Informationelle Kompetenz : ein humanistischer Entwurf (2019) 0.14
    
    Footnote
    Review in: Philosophisch-ethische Rezensionen of 09.11.2019 (Jürgen Czogalla), at: https://philosophisch-ethische-rezensionen.de/rezension/Goedert1.html. In: B.I.T. online 23(2020) H.3, S.345-347 (W. Sühl-Strohmenger) [at: https://www.b-i-t-online.de/heft/2020-03-rezensionen.pdf]. In: Open Password Nr. 805 of 14.08.2020 (H.-C. Hobohm) [at: https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzE0MywiOGI3NjZkZmNkZjQ1IiwwLDAsMTMxLDFd].
  5. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.12
    
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
  6. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.11
    
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  7. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.10
    
    Content
    PhD dissertation at the International Doctorate School in Information and Communication Technology. Cf.: https://core.ac.uk/download/pdf/150083013.pdf.
  8. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.10
    
    Footnote
    Cf. at: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls
  9. Piros, A.: Az ETO-jelzetek automatikus interpretálásának és elemzésének kérdései (2018) 0.10
    
    Content
    Cf. also: New automatic interpreter for complex UDC numbers. At: <https://udcc.org/files/AttilaPiros_EC_36-37_2014-2015.pdf>
  10. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.10
    
    Content
    A thesis presented to The University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf. at: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  11. Wojdynski, B.W.; Kalyanaraman, S.: ¬The three dimensions of website navigability : explication and effects (2016) 0.09
    
    Abstract
    Although the navigability of digital interfaces has long been discussed as a key determinant of the media effects of web use, existing scholarship has not yielded a clear conceptual understanding of navigability, nor of how to measure perceived navigability as an outcome. The present paper attempts to redress both and proposes that navigability be conceptually examined along three dimensions, namely, logic of structure, clarity of structure, and clarity of target. A 2 × 2 × 2 factorial between-subjects experiment (N = 128) was conducted to examine the distinct contributions of these dimensions to perceptions of a nonprofit website. The results showed significant effects of logic of structure and clarity on perceived navigability, while logic of structure and content domain involvement affected attitudes toward the website.
    Date
    22. 1.2016 14:18:13
  12. Parrochia, D.; Neuville, D.: Towards a general theory of classifications (2013) 0.08
    
    Abstract
    This book is an essay on the epistemology of classifications. Its main purpose is not to provide an exposition of an actual mathematical theory of classifications, that is, a general theory that would apply to any kind of them: hierarchical or non-hierarchical, ordinary or fuzzy, overlapping or non-overlapping, finite or infinite, and so on, establishing a basis for all possible divisions of the real world. For the moment, such a theory remains nothing but a dream. Instead, the authors essentially put forward a number of key questions. Their aim is rather to reveal the state of the art of this dynamic field and the philosophy one may eventually adopt to go further. To this end they present some advances made in the course of the last century, discuss a few tricky problems that remain to be solved, and show the avenues open to those who no longer wish to stay on the wrong track. Researchers and professionals interested in the epistemology and philosophy of science, library science, logic and set theory, order theory, or cluster analysis will find this book a comprehensive, original, and progressive introduction to the main questions in this field.
    Date
    8. 9.2016 22:04:09
    LCSH
    Logic, Symbolic and mathematical
    Series
    Studies in universal logic
    Subject
    Logic, Symbolic and mathematical
  13. Herb, U.; Beucke, D.: ¬Die Zukunft der Impact-Messung : Social Media, Nutzung und Zitate im World Wide Web (2013) 0.06
    
    Content
    Cf. at: https://www.leibniz-science20.de/forschung/projekte/altmetrics-in-verschiedenen-wissenschaftsdisziplinen/.
  14. Menzel, C.: Knowledge representation, the World Wide Web, and the evolution of logic (2011) 0.05
    
    Abstract
    In this paper, I have traced a series of evolutionary adaptations of FOL, motivated entirely by its use by knowledge engineers to represent and share information on the Web, culminating in the development of Common Logic. While the primary goal in this paper has been to document this evolution, it is arguable, I think, that CL's syntactic and semantic egalitarianism better realizes the "topic neutrality" that a logic should ideally exemplify - understood, at least in part, as the idea that logic should as far as possible not itself embody any metaphysical presuppositions. Instead of retaining the traditional metaphysical divisions of FOL that reflect its Fregean origins, CL begins as it were with a single, metaphysically homogeneous domain in which, potentially, anything can play the traditional roles of object, property, relation, and function. Note that the effect of this is not to destroy traditional metaphysical divisions. Rather, it is simply to refrain from building those divisions explicitly into one's logic; instead, such divisions are left to the user to introduce and enforce axiomatically in an explicit metaphysical theory.
  15. Corrêa, C.A.; Kobashi, N.Y.: Automatic indexing and information visualization : a study based on paraconsistent logic (2012) 0.05
    
    Abstract
    This paper reports on research evaluating the potential and the effects of using annotated paraconsistent logic in automatic indexing. This logic attempts to deal with contradictions and is concerned with studying and developing inconsistency-tolerant systems of logic. Being flexible and containing logical states that go beyond the dichotomies of yes and no, it permits the hypothesis that indexing results could be better than those obtained by traditional methods. Interactions between different disciplines, such as information retrieval, automatic indexing, information visualization, and nonclassical logics, were considered in this research. From the methodological point of view, an algorithm for the treatment of uncertainty and imprecision, developed under paraconsistent logic, was used to modify the values of the weights assigned to the indexing terms of the text collections. The tests were performed on an information visualization system named Projection Explorer (PEx), created at the Institute of Mathematics and Computer Science (ICMC - USP São Carlos), with available source code. PEx uses the traditional vector space model to represent the documents of a collection. The results were evaluated by criteria built into the information visualization system itself, and demonstrated measurable gains in the quality of the displays, confirming the hypothesis that use of the para-analyser under the conditions of the experiment can generate more effective clusters of similar documents. This point draws attention, since the constitution of more significant clusters can be used to enhance information indexing and retrieval. It can be argued that the adoption of non-dichotomous (non-exclusive) parameters provides new possibilities for relating similar information.
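The abstract does not give the para-analyser's exact algorithm, but the bookkeeping of annotated paraconsistent logic can be sketched as follows; the weight-adjustment rule and all names here are illustrative assumptions, not the authors' implementation:

```python
# Annotated paraconsistent logic attaches to each proposition (here: an
# indexing term) a pair (mu, lam): favorable and contrary evidence in [0, 1].
def certainty_degree(mu: float, lam: float) -> float:
    return mu - lam        # in [-1, 1]; +1 = certainly true, -1 = certainly false

def contradiction_degree(mu: float, lam: float) -> float:
    return mu + lam - 1.0  # in [-1, 1]; +1 = inconsistent, -1 = undetermined

def adjust_term_weight(weight: float, mu: float, lam: float) -> float:
    # Illustrative rule (an assumption): damp a term's indexing weight by
    # its certainty, so contradictory evidence pulls the weight toward 0.
    return weight * max(certainty_degree(mu, lam), 0.0)
```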
  16. Zitt, M.; Lelu, A.; Bassecoulard, E.: Hybrid citation-word representations in science mapping : Portolan charts of research fields? (2011) 0.04
    
    Abstract
    The mapping of scientific fields, based on principles established in the seventies, has recently shown remarkable development, and applications are now booming with progress in computing efficiency. We examine here the convergence of two thematic mapping approaches, citation-based and word-based, which rely on quite different sociological backgrounds. A corpus in the nanoscience field was broken down into research themes, using the same clustering technique on the two networks separately. The tool for comparison is the table of intersections of the M clusters (here M = 50) built on either side. A classical visual exploitation of such contingency tables is based on correspondence analysis. We investigate a rearrangement of the intersection table (block modeling), resulting in a pseudo-map. The interest of this representation for confronting the two breakdowns is discussed. The amount of convergence found is, in our view, a strong argument in favor of the reliability of bibliometric mapping. However, the outcomes are not convergent to the degree that they can be substituted for each other. The differences highlight the complementarity between approaches based on different networks. In contrast with the strong informetric posture found in the recent literature, where lexical and citation markers are considered as miscible tokens, the framework proposed here does not mix the two elements at an early stage, in compliance with their contrasted logic.
    Date
    8. 1.2011 18:22:50
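The cluster-intersection table described in the abstract above (confronting a citation-based and a word-based breakdown of the same corpus) can be sketched as a simple contingency count; the labels and toy data are invented for illustration:

```python
from collections import Counter

def intersection_table(citation_clusters, word_clusters):
    """Count how many documents fall into each pair of clusters,
    given one label per document from each of the two breakdowns."""
    return Counter(zip(citation_clusters, word_clusters))

# Toy data: 5 documents, two clusters on each side (the paper uses M = 50).
citation = ["c1", "c1", "c2", "c2", "c2"]
word = ["w1", "w1", "w1", "w2", "w2"]
table = intersection_table(citation, word)
# Large cells such as table[("c1", "w1")] indicate agreement between the
# two mappings; a scattered table indicates divergent breakdowns.
```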
  17. Chen, Z.; Huang, Y.; Tian, J.; Liu, X.; Fu, K.; Huang, T.: Joint model for subsentence-level sentiment analysis with Markov logic (2015) 0.04
    
    Abstract
    Sentiment analysis mainly focuses on the study of opinions that express positive or negative sentiments. With the explosive growth of web documents, sentiment analysis is becoming a hot topic in both academic research and system design. Fine-grained sentiment analysis is traditionally solved with a 2-step strategy, which results in cascade errors. Although joint models, such as joint sentiment/topic and maximum entropy (MaxEnt)/latent Dirichlet allocation, have been proposed to tackle this problem, they focus on the joint learning of both aspects and sentiments. Thus, they are not appropriate for solving the cascade errors of sentiment analysis at the sentence or subsentence level. In this article, we present a novel jointly fine-grained sentiment analysis framework at the subsentence level with Markov logic. First, we divide the task into 2 separate stages (subjectivity classification and polarity classification). Then, the 2 separate stages are processed, respectively, with different feature sets, which are implemented by local formulas in Markov logic. Finally, global formulas in Markov logic are adopted to realize the interactions of the 2 separate stages. The joint inference of subjectivity and polarity helps prevent cascade errors. Experiments on a Chinese sentiment data set show that our joint model brings significant improvements.
  18. Reasoning Web : Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures (2017) 0.04
    
    LCSH
    Mathematical logic
    Mathematical Logic and Formal Languages
    Subject
    Mathematical logic
    Mathematical Logic and Formal Languages
  19. Andreas, H.: On frames and theory-elements of structuralism (2014) 0.03
    
    Abstract
    There are quite a few success stories illustrating philosophy's relevance to information science. One can cite, for example, Leibniz's work on a characteristica universalis and a corresponding calculus ratiocinator, through which he aspired to reduce reasoning to calculating. It goes without saying that formal logic initiated research on decidability and computational complexity. But even beyond the realm of formal logic, philosophy has served as a source of inspiration for developments in information and computer science. At the end of the twentieth century, formal ontology emerged from a quest for a semantic foundation of information systems with higher reusability than the systems available at the time. A success story that is less well documented is the advent of frame systems in computer science. Minsky is credited with having laid out the foundational ideas of such systems. In that work, the logic programming approach to knowledge representation is criticized by arguing that one should be more careful about the way human beings recognize objects and situations. Notably, the paper draws heavily on the writings of Kuhn and the Gestalt theorists. It is not our intent, however, to document the traces of the frame idea in the works of philosophers. What follows is, rather, an exposition of a methodology for representing scientific knowledge that is essentially frame-like. This methodology is labelled as structuralist theory of science or, in short, as structuralism. The frame-like character of its basic meta-theoretical concepts makes structuralism likely to be useful in knowledge representation.
  20. Thomer, A.; Cheng, Y.-Y.; Schneider, J.; Twidale, M.; Ludäscher, B.: Logic-based schema alignment for natural history museum databases (2017) 0.03
    
    Abstract
    In natural history museums, knowledge organization systems have gradually been migrated from paper-based catalog ledgers to electronic databases; these databases in turn must be migrated from one platform or software version to another. These migrations are by no means straightforward, particularly when one data schema must be mapped to another, or when a database has been used in other than its intended manner. There are few tools or methods available to support the necessary work of comparing divergent data schemas. Here we present a proof-of-concept in which we compare two versions of a subset of the Specify 6 data model using Euler/X, a logic-based reasoning tool. Specify 6 is a popular natural history museum database system whose data model has undergone several changes over its lifespan. We use Euler/X to produce visualizations (called "possible worlds") of the different ways that two versions of this data model might be mapped to one another. This proof-of-concept lays the groundwork for further approaches that could aid data curators in database migration and maintenance work. It also contributes to research on the unique challenges to knowledge organization within natural history museums, and on the applicability of logic-based approaches to database schema migration or crosswalking.
