Search (1404 results, page 1 of 71)

  • Filter: type_ss:"a"
  • Filter: year_i:[2000 TO 2010} (Solr range syntax: 2000 inclusive, 2010 exclusive)
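The second filter uses Solr's range syntax, in which `[` is inclusive and `}` is exclusive, so `year_i:[2000 TO 2010}` matches the years 2000 through 2009. A minimal sketch of building such a filtered request with Python's standard library; the host, port and core name (`literature`) are hypothetical, only the parameter shapes matter:

```python
from urllib.parse import urlencode

# Hypothetical Solr select endpoint.
base = "http://localhost:8983/solr/literature/select"

params = {
    "q": "*:*",
    # fq = filter queries; [ is inclusive, } is exclusive,
    # so this matches years 2000 through 2009.
    "fq": ['type_ss:"a"', "year_i:[2000 TO 2010}"],
    "rows": 20,
    "debugQuery": "true",  # asks Solr to attach score explanations to each hit
}

# doseq=True emits one fq= pair per filter query.
query_string = urlencode(params, doseq=True)
print(base + "?" + query_string)
```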
  1. Schrodt, R.: Tiefen und Untiefen im wissenschaftlichen Sprachgebrauch (2008) 0.45
    0.4521981 = product of:
      0.7913466 = sum of:
        0.07913466 = product of:
          0.23740397 = sum of:
            0.23740397 = weight(_text_:3a in 140) [ClassicSimilarity], result of:
              0.23740397 = score(doc=140,freq=2.0), product of:
                0.31681007 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.037368443 = queryNorm
                0.7493574 = fieldWeight in 140, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=140)
          0.33333334 = coord(1/3)
        0.23740397 = weight(_text_:2f in 140) [ClassicSimilarity], result of:
          0.23740397 = score(doc=140,freq=2.0), product of:
            0.31681007 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037368443 = queryNorm
            0.7493574 = fieldWeight in 140, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=140)
        0.23740397 = weight(_text_:2f in 140) [ClassicSimilarity], result of:
          0.23740397 = score(doc=140,freq=2.0), product of:
            0.31681007 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037368443 = queryNorm
            0.7493574 = fieldWeight in 140, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=140)
        0.23740397 = weight(_text_:2f in 140) [ClassicSimilarity], result of:
          0.23740397 = score(doc=140,freq=2.0), product of:
            0.31681007 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037368443 = queryNorm
            0.7493574 = fieldWeight in 140, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=140)
      0.5714286 = coord(4/7)
    
    Content
    See also: https://studylibde.com/doc/13053640/richard-schrodt and http://www.univie.ac.at/Germanistik/schrodt/vorlesung/wissenschaftssprache.doc.
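The breakdown under result 1 is Lucene's ClassicSimilarity "explain" output. Its numbers can be reproduced by hand; a minimal Python sketch, with every constant copied from the explanation above rather than computed from any index:

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(freq):
    # ClassicSimilarity: tf = sqrt(term frequency)
    return math.sqrt(freq)

query_norm = 0.037368443                  # queryNorm from the explain output
term_idf = idf(24, 44218)                 # ≈ 8.478011
query_weight = term_idf * query_norm      # ≈ 0.31681007

field_norm = 0.0625                       # fieldNorm(doc=140)
field_weight = tf(2.0) * term_idf * field_norm  # ≈ 0.7493574
term_score = query_weight * field_weight        # ≈ 0.23740397

# One "3a" clause, itself scaled by coord(1/3), plus three identical "2f"
# clauses; the whole sum is scaled by coord(4/7) because 4 of the 7 query
# clauses matched this document.
total = (term_score * (1 / 3) + 3 * term_score) * (4 / 7)
print(round(total, 7))                    # ≈ 0.4521981, the headline score
```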
  2. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.43
    
    Content
    See: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
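Several links in these records were harvested through Google redirect URLs, whose real target sits percent-encoded in a `url` parameter; the escapes `%2F` (`/`) and `%3A` (`:`) are how the otherwise puzzling query terms `2f` and `3a` in the score breakdowns got into the indexed `_text_` field. A sketch of recovering the target with Python's standard library (redirect shortened to the relevant parameters):

```python
from urllib.parse import urlparse, parse_qs

redirect = ("http://www.google.de/url?sa=t&rct=j&url="
            "http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload"
            "%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf")

# parse_qs percent-decodes values, so the real target comes out clean.
target = parse_qs(urlparse(redirect).query)["url"][0]
print(target)
# → http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf
```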
  3. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.40
    
    Content
    See: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5386707.
  4. Mas, S.; Marleau, Y.: Proposition of a faceted classification model to support corporate information organization and digital records management (2009) 0.34
    
    Footnote
    See: http://ieeexplore.ieee.org/iel5/4755313/4755314/04755480.pdf?arnumber=4755480.
  5. Donsbach, W.: Wahrheit in den Medien : über den Sinn eines methodischen Objektivitätsbegriffes (2001) 0.28
    
    Source
    Politische Meinung. 381(2001) no.1, pp.65-74 [https://www.dgfe.de/fileadmin/OrdnerRedakteure/Sektionen/Sek02_AEW/KWF/Publikationen_Reihe_1989-2003/Band_17/Bd_17_1994_355-406_A.pdf]
  6. Aalberg, T.; Haugen, F.B.; Husby, O.: ¬A Tool for Converting from MARC to FRBR (2006) 0.04
    
    Abstract
    The FRBR model is considered by many to be an important contribution to the next generation of bibliographic catalogues, but a major challenge for the library community is how to apply this model to already existing MARC-based catalogues. This problem requires a solution for the interpretation and conversion of MARC records; a tool for this kind of conversion has been developed as part of the Norwegian BIBSYS FRBR project. The tool is based on a systematic approach to the interpretation and conversion process and is designed to be adaptable to the rules applied in different catalogues.
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
  7. ¬Der große, exklusive TOMORROW-Text : Die beste Suchmaschine der Welt ... und der beste Web-Katalog ... und der beste Metasucher (2000) 0.04
    
    Content
    Includes individual pieces on Acoon, Yahoo and MetaGer; interviews with the search-engine bosses ("Who is your favourite competitor?"); how a search engine works; KARZAUNINKAT, S.: How to find just what you are looking for, the easy way; 20 questions: which search-engine type are you?; KARZAUNINKAT, S.: Control is the best protection; BETZ, S.: Why you get to search for free; GLASER, S.: Between nonsense and quantum physics; search engines for specialist questions
  8. Levinson, R.: Symmetry and the computation of conceptual structures (2000) 0.03
    
    Abstract
    The discovery and exploitation of symmetry plays a major role in sciences such as crystallography, quantum theory, condensed-matter physics, thermodynamics, chemistry and biology. It should not be surprising, then, since Conceptual Structures are proposed as a universal knowledge representation scheme, that symmetry should play a role in their interpretation and their application. In this tutorial-style paper, we illustrate the role of symmetry in Conceptual Structures and show how algorithms may be constructed that exploit this symmetry in order to achieve computational efficiency.
    Date
    3. 9.2000 19:22:45
  9. Weizenbaum, J.: Wir gegen die Gier (2008) 0.03
    
    Content
    As an example of the use of metaphors in the natural sciences, this one comes to mind: a black hole is a star whose gravitational pull is so strong that no information can escape. But literally such a star is neither "black" nor a "hole". And information, that is, electromagnetic particles, does not "escape" from ordinary stars. My colleague Norbert Wiener once wrote: "Information is information, not matter or energy." It is always a private achievement, namely that of interpretation, whose result is knowledge. Information has, like the performance of a dance, no permanence; it is neither matter nor energy. The measure of truth of the knowledge produced depends on the quality of the interpretation applied. Knowledge survives by literally informing the thinking person, that is, by changing the state of his brain. Claude Shannon's information theory teaches us that the meaning of a message depends on the expectation of the receiver. It is not measurable, for messages are pure signals that carry no inherent meaning. Does the New York telephone book contain information? No! It consists of data, namely texts which, in order to become information and knowledge, must be interpreted. The reader expects certain contents to represent names, addresses and telephone numbers. Does this telephone book contain the information that many Armenians live close to one another?
    Date
    16. 3.2008 12:22:08
  10. Morris, J.: Individual differences in the interpretation of text : implications for information science (2009) 0.03
    
    Abstract
    Many tasks in library and information science (e.g., indexing, abstracting, classification, and text analysis techniques such as discourse and content analysis) require text meaning interpretation, and, therefore, any individual differences in interpretation are relevant and should be considered, especially for applications in which these tasks are done automatically. This article investigates individual differences in the interpretation of one aspect of text meaning that is commonly used in such automatic applications: lexical cohesion and lexical semantic relations. Experiments with 26 participants indicate an approximately 40% difference in interpretation. In total, 79, 83, and 89 lexical chains (groups of semantically related words) were analyzed in 3 texts, respectively. A major implication of this result is the possibility of modeling individual differences for individual users. Further research is suggested for different types of texts and readers than those used here, as well as similar research for different aspects of text meaning.
  11. Qin, J.; Paling, S.: Converting a controlled vocabulary into an ontology : the case of GEM (2001) 0.03
    
    Date
    24. 8.2005 19:20:22
    Theme
    Conception and application of the thesaurus principle
  12. Springer, M.: Ist das Gehirn ein Quantencomputer? (2006) 0.03
    
    Content
    "Die unscheinbare graue Grütze im Schädel spiegelt jedem von uns eine private Welt vor, deren sinnlicher Reichtum noch das kühnste Multiversum übertrifft, das ein Kosmologe sich auszudenken vermag. Irgendwie bringt die feuchte, lauwarme Masse es fertig, uns das letztlich unbeschreibliche Erleben von Farben, Tönen und Gerüchen, von Schmerzen und Stimmungen zu bescheren. Nur wie? Das ist die berüchtigte Erklärungslücke der Bewusstseinsforschung. Manche sehen darin ein Scheinproblem, andere ein unlösbares Rätsel, und wieder andere mühen sich um einen Brückenschlag zwischen objektiver Hirntätigkeit und subjektiven »Qualia«. Einige Brückenbauer suchen dabei ihr Heil in der Quantenphysik. In der Tat haben Bewusstseins- und Quantenphänomene auf den ersten Blick etwas Entscheidendes gemeinsam: Beide sind »holistisch«. Qualia werden als Ganzheiten erlebt, nicht als Stückwerk verschiedener Sinnesdaten. Analog lassen sich typische Quantenzustände - anders als klassische Mehrteilchensysteme - nicht als bloße Summe der Zustände der beteiligten Partikel beschreiben, weshalb Physiker sie als nichtlokal, kohärent oder verschränkt bezeichnen. Außerdem sah es zumindest anfangs so aus, als enthalte die Quantentheorie eine subjektive Komponente. Gemäß der Kopenhagener Deutung hat es keinen Sinn, von der Existenz einer Teilcheneigenschaft zu sprechen, bevor sie beobachtet wird. Einige Interpreten gingen sogar so weit, unter Beobachtung nicht die Wechselwirkung zwischen Quantenobjekt und Messgerät zu verstehen, sondern den Eintritt des Messresultats ins Bewusstsein des Beobachters. Diese vagen Analogien nährten die Hoffnung, mit überlagerten Quantenzuständen die Erklärungslücke der Hirnforschung schließen zu können. Prominentester Hoffnungsträger ist dabei der Mathematiker und Gravitationstheoretiker Roger Penrose. 
Seiner Überzeugung nach wird eine künftigeTheorie der Quantengravitation nicht nur das Messproblem der Quantenmechanik lösen, sondern auch eine Physik des Bewusstseins begründen. So allgemein hat diese Idee einen gewissen Charme. Die Synthese von Quantenphysik und Gravitationstheorie wird derzeit in so abstrakten Gebilden gesucht wie Strings oder Loop-Quanten, und wer wollte ausschließen, dass bei dieser großen Vereinigung auch etwas für die Bewusstseinsforschung abfällt. Doch hat sich Penrose von dem Anästhesisten Stuart Hameroff einreden lassen, in den Mikrotubuli, langen Röhrenmolekülen im Zellskelett, den Sitz des Quantenbewusstseins zu vermuten. Das war ein Fehler. Über die noch nicht existente Quantengravitation lässt sich trefflich spekulieren, und Penrose gilt als Fachmann beim Skizzieren ihrer möglichen Umrisse. In den Niederungen der konkreten Hirnforschung dagegen ist er blutiger Laie. Und so fing er sich denn auch jetzt eine volle Breitseite des Hirnforschers Christof Koch ein, der in einem Beitrag in »Nature« (Bd. 440, S. 611) die Schwachpunkte in den kühnen Gedankenflügen des Mathematikers bloßstellt. Die Effekte der Quantenmechanik machen sich in aller Regel nur im submikroskopischen Bereich bemerkbar. Zudem sind kohärente Mehrteilchenzustände extrem störanfällig, sodass sie sich bisher lediglich mit wenigen Partikeln oder bei extrem tiefen Temperaturen erzeugen ließen. Zellmoleküle haben im Vergleich dazu, so Koch, riesige Ausmaße. Obendrein ist das Gehirn bei seiner Betriebstemperatur-300 Grad über absolut null -für nutzbare Quanteneffekte viel zu heiß. Obwohl ich gelernter Physiker und Penrose als Übersetzer seines Buchs »Computerdenken« durchaus gewogen bin, muss ich mich dieser Argumentation beugen. Die Erklärungslücke wird sich wohl nicht »da unten«, auf der Mikroebene der Quantenwelt, schließen, sondern klafft »hier oben«, auf dem makroskopischen Niveau neuronaler Netze. 
Wir müssen uns eben damit abfinden, dass unsere elementarsten Erlebnisse Produkt der komplexesten Prozesse überhaupt sind."
  13. Frâncu, V.: ¬An interpretation of the FRBR model (2004) 0.02
    
    Abstract
    Despite the existence of a logical structural model for bibliographic records which integrates any record type, library catalogues persist in offering catalogue records at the level of 'items'. Such records, however, do not clearly indicate which works they contain, so the search possibilities of the end user are unduly limited. The Functional Requirements for Bibliographic Records (FRBR) present, through a conceptual model independent of any cataloguing code or implementation, a globalized view of the bibliographic universe. This model, a synthesis of the existing cataloguing rules, consists of clearly structured entities and well-defined types of relationships among them. From a theoretical viewpoint, the model is likely to be a good knowledge organiser with great potential for identifying the author and the work represented by an item or publication, and it is able to link different works of an author with different editions, translations or adaptations of those works, aiming to better answer user needs. This paper presents an interpretation of the FRBR model, contrasting it with a traditional bibliographic record for a complex library material.
    Date
    17. 6.2015 14:40:22
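The explain trees shown with each hit are standard Lucene/Solr debug output under the classic (pre-BM25) TF-IDF similarity. As a sketch of how the 0.02 score for entry 13 arises, the following recomputes it from the figures read off the tree above; it assumes classic Lucene Similarity semantics (tf = sqrt(freq), queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm).

```python
import math

# Reproduce the explain tree for entry 13 (doc 2647) under classic
# Lucene TF-IDF similarity. Each term's score is queryWeight * fieldWeight.
QUERY_NORM = 0.037368443  # queryNorm from the tree

def term_score(freq, idf, field_norm):
    tf = math.sqrt(freq)                  # 2.0 for freq=4, 1.4142135 for freq=2
    query_weight = idf * QUERY_NORM       # 0.21405315 for 'interpretation'
    field_weight = tf * idf * field_norm  # 0.35801122 for 'interpretation'
    return query_weight * field_weight

interp = term_score(freq=4.0, idf=5.7281795, field_norm=0.03125)        # ~0.07663343
date22 = term_score(freq=2.0, idf=3.5018296, field_norm=0.03125) * 0.5  # coord(1/2)
total = (interp + date22) * (2 / 7)       # coord(2/7): 2 of 7 query clauses matched
print(f"{total:.9f}")                     # ~0.024788357, the score shown above
```

The explain values are single-precision floats, so a double-precision recomputation agrees with the displayed 0.024788357 only to about seven decimal places.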
  14. Johnson, E.H.: Objects for distributed heterogeneous information retrieval (2000) 0.02
    0.022969227 = product of:
      0.08039229 = sum of:
        0.067735024 = weight(_text_:interpretation in 6959) [ClassicSimilarity], result of:
          0.067735024 = score(doc=6959,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.3164402 = fieldWeight in 6959, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6959)
        0.01265727 = product of:
          0.02531454 = sum of:
            0.02531454 = weight(_text_:22 in 6959) [ClassicSimilarity], result of:
              0.02531454 = score(doc=6959,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.19345059 = fieldWeight in 6959, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6959)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    The success of the World Wide Web shows that we can access, search, and retrieve information from globally distributed databases. If a database, such as a library catalog, has some sort of Web-based front end, we can type its URL into a Web browser and use its HTML-based forms to search for items in that database. Depending on how well the query conforms to the database content, how the search engine interprets the query, and how the server formats the results into HTML, we might actually find something usable. While the first two issues depend on ourselves and the server, on the Web the latter falls to the mercy of HTML, which we all know as a great destroyer of information because it codes for display but not for content description. When looking at an HTML-formatted display, we must depend on our own interpretation to recognize such entities as author names, titles, and subject identifiers. The Web browser can do nothing but display the information. If we want some other view of the result, such as sorting the records by date (provided it offers such an option to begin with), the server must do it. This makes poor use of the computing power we have at the desktop (or even laptop), which, unless it involves retrieving more records, could easily do the result set manipulation that we currently send back to the server. Despite having personal computers with immense computational power, as far as information retrieval goes, we still essentially use them as dumb terminals.
    Date
    22. 9.1997 19:16:05
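Johnson's point that the desktop could handle result-set manipulation itself is easy to picture: once records arrive as structured data rather than display-only HTML, re-sorting needs no server round trip. A minimal sketch with invented records (not Johnson's actual object design):

```python
# Toy client-side result-set manipulation: the records here are invented
# stand-ins for structured catalog data received from a server.
records = [
    {"title": "Objects for distributed IR", "year": 2000},
    {"title": "An interpretation of the FRBR model", "year": 2004},
    {"title": "Developing a protocol for bioinformatics analysis", "year": 2005},
]

# Sorting by date locally, newest first -- no second server request needed.
by_date = sorted(records, key=lambda r: r["year"], reverse=True)
print([r["year"] for r in by_date])  # → [2005, 2004, 2000]
```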
  15. Bartlett, J.C.; Toms, E.G.: Developing a protocol for bioinformatics analysis : an integrated information behavior and task analysis approach (2005) 0.02
    0.022969227 = product of:
      0.08039229 = sum of:
        0.067735024 = weight(_text_:interpretation in 5256) [ClassicSimilarity], result of:
          0.067735024 = score(doc=5256,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.3164402 = fieldWeight in 5256, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5256)
        0.01265727 = product of:
          0.02531454 = sum of:
            0.02531454 = weight(_text_:22 in 5256) [ClassicSimilarity], result of:
              0.02531454 = score(doc=5256,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.19345059 = fieldWeight in 5256, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5256)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    The purpose of this research is to capture, understand, and model the process used by bioinformatics analysts when facing a specific scientific problem. Integrating information behavior with task analysis, we interviewed 20 bioinformatics experts about the process they follow to conduct a typical bioinformatics analysis - a functional analysis of a gene, and then used a task analysis approach to model that process. We found that each expert followed a unique process in using bioinformatics resources, but had significant similarities with their peers. We synthesized these unique processes into a standard research protocol, from which we developed a procedural model that describes the process of conducting a functional analysis of a gene. The model protocol consists of a series of 16 individual steps, each of which specifies detail for the type of analysis, how and why it is conducted, the tools used, the data input and output, and the interpretation of the results. The linking of information behavior and task analysis research is a novel approach, as it provides a rich high-level view of information behavior while providing a detailed analysis at the task level. In this article we concentrate on the latter.
    Date
    22. 7.2006 14:28:55
  16. Franken, G.: Weglassen öffnet den Weg zur Welt : BuchMalerei und Wortarchitektur von Elisabeth Jansen in Küchenhof-Remise (2004) 0.02
    0.022969227 = product of:
      0.08039229 = sum of:
        0.067735024 = weight(_text_:interpretation in 1820) [ClassicSimilarity], result of:
          0.067735024 = score(doc=1820,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.3164402 = fieldWeight in 1820, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1820)
        0.01265727 = product of:
          0.02531454 = sum of:
            0.02531454 = weight(_text_:22 in 1820) [ClassicSimilarity], result of:
              0.02531454 = score(doc=1820,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.19345059 = fieldWeight in 1820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1820)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Content
    "The world of the information age presses in on people ever more intensely and densely, and at the same time ever more encrypted, coded, filtered, and mediated as media messages. The engagement with these signs, whether in visualizing form or cast as text, has for years stood at the center of the mostly small-format works of the Hebborn artist Elisabeth Jansen: she counters the information overflow, which in its noisy layering grows into information noise, with an energetic, impulse-driven act of selection and reduction, with collage and painted-over color as her preferred instruments of data taming. In the Remise of the Altenberger Küchenhof, results of her enormously fruitful production of recent years can be seen until June 13 under the title "BuchMalerei und Wortarchitektur". The starting point of her works is often a paper already designed in some way: a calendar page, a brochure, an art postcard that loses its physiognomy under the lacquer pen. The contours of the omissions, which allow the ground to appear still in part, force an entirely new interpretation upon the carrier. Outlines and shapes of fragmented, torso-like form emerge, which only in exceptional cases pick up and integrate found elements as a formalized structure, supplemented by drawn figures. The world acquires new meaning, is constructed anew on the paper, really in the head. This reformulation gains particular density in the artist's books, in which Elisabeth Jansen carries such a transformation process forward over a longer period in the manner of a diary or album.
These annalistically continuing commentaries find their culmination in the "Wortarchitekturen" (word architectures), in which individual words or fragments of lines are selected from the day's reading and added up into new bodies of text: the world becomes a kaleidoscope of impressions owed to chance standpoints. What remains is decided by the subject itself."
    Date
    3. 5.1997 8:44:22
  17. Procházka, D.: ¬The development of uniform titles for choreographic works (2006) 0.02
    0.021895267 = product of:
      0.15326686 = sum of:
        0.15326686 = weight(_text_:interpretation in 223) [ClassicSimilarity], result of:
          0.15326686 = score(doc=223,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.71602243 = fieldWeight in 223, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0625 = fieldNorm(doc=223)
      0.14285715 = coord(1/7)
    
    Abstract
    In 1994, the Library of Congress issued a rule interpretation to AACR2 detailing how uniform titles for choreographic works should be established. The value of the rule interpretation is discussed, and it is contrasted with prior practices. The origins of the concept behind the rule are traced back to the New York Public Library in the mid twentieth century, and its evolution into the current guidelines is delineated.
  18. Rindflesch, T.C.; Fiszman, M.: The interaction of domain knowledge and linguistic structure in natural language processing : interpreting hypernymic propositions in biomedical text (2003) 0.02
    0.019352864 = product of:
      0.13547005 = sum of:
        0.13547005 = weight(_text_:interpretation in 2097) [ClassicSimilarity], result of:
          0.13547005 = score(doc=2097,freq=8.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.6328804 = fieldWeight in 2097, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2097)
      0.14285715 = coord(1/7)
    
    Abstract
    Interpretation of semantic propositions in free-text documents such as MEDLINE citations would provide valuable support for biomedical applications, and several approaches to semantic interpretation are being pursued in the biomedical informatics community. In this paper, we describe a methodology for interpreting linguistic structures that encode hypernymic propositions, in which a more specific concept is in a taxonomic relationship with a more general concept. In order to effectively process these constructions, we exploit underspecified syntactic analysis and structured domain knowledge from the Unified Medical Language System (UMLS). After introducing the syntactic processing on which our system depends, we focus on the UMLS knowledge that supports interpretation of hypernymic propositions. We first use semantic groups from the Semantic Network to ensure that the two concepts involved are compatible; hierarchical information in the Metathesaurus then determines which concept is more general and which more specific. A preliminary evaluation of a sample based on the semantic group Chemicals and Drugs provides 83% precision. An error analysis was conducted and potential solutions to the problems encountered are presented. The research discussed here serves as a paradigm for investigating the interaction between domain knowledge and linguistic structure in natural language processing, and could also make a contribution to research on automatic processing of discourse structure. Additional implications of the system we present include its integration in advanced semantic interpretation processors for biomedical text and its use for information extraction in specific domains. The approach has the potential to support a range of applications, including information retrieval and ontology engineering.
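The two-step pipeline the abstract describes (a semantic-group compatibility check, then hierarchical information to decide which concept is the more general) can be illustrated with a toy sketch. The group table and is-a relations below are invented stand-ins, not actual UMLS data or the authors' system:

```python
# Toy interpretation of a hypernymic proposition: accept a concept pair only
# if both fall in the same semantic group, then let taxonomy depth decide
# which concept is the hypernym. All data here are invented stand-ins for
# the UMLS Semantic Network and Metathesaurus.
SEMANTIC_GROUP = {
    "Antibiotic": "Chemicals & Drugs",
    "Pharmacologic Substance": "Chemicals & Drugs",
    "Disease or Syndrome": "Disorders",
}
IS_A = {  # child -> parent, a miniature taxonomy
    "erythromycin": "macrolide antibiotic",
    "macrolide antibiotic": "antibiotic",
}

def ancestors(concept):
    """Walk the is-a chain upward and collect every more general concept."""
    found = []
    while concept in IS_A:
        concept = IS_A[concept]
        found.append(concept)
    return found

def interpret(c1, type1, c2, type2):
    # Step 1: the two semantic types must belong to the same semantic group.
    if SEMANTIC_GROUP.get(type1) != SEMANTIC_GROUP.get(type2):
        return None
    # Step 2: hierarchy orients the ISA relation (specific ISA general).
    if c2 in ancestors(c1):
        return (c1, "ISA", c2)
    if c1 in ancestors(c2):
        return (c2, "ISA", c1)
    return None

result = interpret("erythromycin", "Antibiotic",
                   "antibiotic", "Pharmacologic Substance")
print(result)  # → ('erythromycin', 'ISA', 'antibiotic')
```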
  19. Ohly, H.P.: Erstellung und Interpretation von semantischen Karten am Beispiel des Themas 'Soziologische Beratung' (2004) 0.02
    0.019158358 = product of:
      0.1341085 = sum of:
        0.1341085 = weight(_text_:interpretation in 3176) [ClassicSimilarity], result of:
          0.1341085 = score(doc=3176,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.6265196 = fieldWeight in 3176, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3176)
      0.14285715 = coord(1/7)
    
    Abstract
    When information flows and information systems are analysed with statistical methods, the results are often presented in graphics, since these can be grasped intuitively and more quickly, and even laypersons without deeper statistical training can form an impression. A classic example is the graphical depiction of the losses of Napoleon's army in Russia (Fig. 1). What often goes unnoticed is that, despite the simplicity of the presentation, large amounts of data are usually drawn upon and then projected into a graphic according to only a few criteria, which can easily lead to misjudgements. Suitable methods must therefore be selected that allow an adequate and as 'objective' an interpretation as possible.
  20. Bean, C.A.: Representation of medical knowledge for automated semantic interpretation of clinical reports (2004) 0.02
    0.018961856 = product of:
      0.13273299 = sum of:
        0.13273299 = weight(_text_:interpretation in 2660) [ClassicSimilarity], result of:
          0.13273299 = score(doc=2660,freq=12.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.6200936 = fieldWeight in 2660, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.03125 = fieldNorm(doc=2660)
      0.14285715 = coord(1/7)
    
    Abstract
    A set of cardiac catheterisation case reports was analysed to identify, and encode for automated interpretation, the semantic indicators of location and severity of disease in coronary arteries. Presence of disease was indicated by the use of specific or general disease terms, typically with a modifier, while absence of disease was indicated by negation of similar phrases. Disease modifiers indicating severity could be qualitative or quantitative, and a 7-point severity scale was devised to normalise these modifiers based on relative clinical significance. Location of disease was indicated in three basic ways: by situation in arbitrary topographic divisions, by situation relative to a named structure, or by using named structures as boundary delimiters to describe disease extent. In addition, semantic indicators were identified for such topological relationships as proximity, contiguity, overlap, and enclosure. Spatial reasoning was often necessary to understand the specific localisation of disease, demonstrating the need for a general spatial extension to the underlying knowledge base.
    Content
    1. Introduction In automated semantic interpretation, the expressions in natural language text are mapped to a knowledge model, thus providing a means of normalising the relevant concepts and relationships encountered. However, the ultimate goal of comprehensive and consistent semantic interpretation of unrestrained text, even within a single domain such as medicine, is still beyond the current state of the art of natural language processing. In order to scale back the complexity of the task of automated semantic interpretation, we have restricted our domain of interest to coronary artery anatomy and our text to cardiac catheterisation reports. Using a multi-phased approach, a staged series of projects is enhancing the development of a semantic interpretation system for free clinical text in the specific subdomain of coronary arteriography.
