Search (38 results, page 1 of 2)

  • × theme_ss:"Begriffstheorie"
  • × year_i:[2000 TO 2010}
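The two active facets above are Solr-style filter queries; in Lucene/Solr range syntax the mixed brackets in `year_i:[2000 TO 2010}` denote an inclusive lower and an exclusive upper bound, i.e. the years 2000-2009. A minimal sketch of how the underlying request could be assembled - the `q`, `rows`, and `debugQuery` values are assumptions (the rows-per-page guess is merely consistent with "38 results, page 1 of 2"); only the two `fq` values are taken verbatim from the facets:

```python
from urllib.parse import urlencode

# Sketch of the Solr request parameters implied by the search above.
# Only the two fq filters are verbatim from the page; everything else
# is a plausible placeholder.
params = [
    ("q", "*:*"),
    ("fq", 'theme_ss:"Begriffstheorie"'),
    # [2000 TO 2010} = lower bound inclusive, upper bound exclusive
    ("fq", "year_i:[2000 TO 2010}"),
    ("rows", "20"),          # assumed page size
    ("debugQuery", "true"),  # in Solr, this yields "explain" score trees
]
query_string = urlencode(params)
print(query_string)
```

Each repeated `fq` narrows the result set independently, which is why both filters appear as separate removable chips above.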
  1. Voß, V.: Denken, verstehen, wissen : eine sprachvergleichende Untersuchung zu lexikalischen Bezeichnungen mentaler Tätigkeiten, Vorgänge und Zustände (2009) 0.07
    0.06654282 = product of:
      0.24953555 = sum of:
        0.054854408 = weight(_text_:allgemeines in 504) [ClassicSimilarity], result of:
          0.054854408 = score(doc=504,freq=4.0), product of:
            0.12306474 = queryWeight, product of:
              5.705423 = idf(docFreq=399, maxDocs=44218)
              0.021569785 = queryNorm
            0.44573617 = fieldWeight in 504, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.705423 = idf(docFreq=399, maxDocs=44218)
              0.0390625 = fieldNorm(doc=504)
        0.017560039 = weight(_text_:und in 504) [ClassicSimilarity], result of:
          0.017560039 = score(doc=504,freq=18.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.3673144 = fieldWeight in 504, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=504)
        0.0038187557 = weight(_text_:in in 504) [ClassicSimilarity], result of:
          0.0038187557 = score(doc=504,freq=6.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.1301535 = fieldWeight in 504, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=504)
        0.017560039 = weight(_text_:und in 504) [ClassicSimilarity], result of:
          0.017560039 = score(doc=504,freq=18.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.3673144 = fieldWeight in 504, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=504)
        0.05837661 = weight(_text_:einzelne in 504) [ClassicSimilarity], result of:
          0.05837661 = score(doc=504,freq=4.0), product of:
            0.12695427 = queryWeight, product of:
              5.885746 = idf(docFreq=333, maxDocs=44218)
              0.021569785 = queryNorm
            0.4598239 = fieldWeight in 504, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.885746 = idf(docFreq=333, maxDocs=44218)
              0.0390625 = fieldNorm(doc=504)
        0.05837661 = weight(_text_:einzelne in 504) [ClassicSimilarity], result of:
          0.05837661 = score(doc=504,freq=4.0), product of:
            0.12695427 = queryWeight, product of:
              5.885746 = idf(docFreq=333, maxDocs=44218)
              0.021569785 = queryNorm
            0.4598239 = fieldWeight in 504, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.885746 = idf(docFreq=333, maxDocs=44218)
              0.0390625 = fieldNorm(doc=504)
        0.03758053 = weight(_text_:deutsche in 504) [ClassicSimilarity], result of:
          0.03758053 = score(doc=504,freq=4.0), product of:
            0.10186133 = queryWeight, product of:
              4.7224083 = idf(docFreq=1068, maxDocs=44218)
              0.021569785 = queryNorm
            0.36893815 = fieldWeight in 504, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.7224083 = idf(docFreq=1068, maxDocs=44218)
              0.0390625 = fieldNorm(doc=504)
        0.0014085418 = weight(_text_:s in 504) [ClassicSimilarity], result of:
          0.0014085418 = score(doc=504,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.060061958 = fieldWeight in 504, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=504)
      0.26666668 = coord(8/30)
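The tree above is apparently Lucene ClassicSimilarity "explain" output, and its numbers can be reproduced from the standard tf-idf formulas. A sketch that re-derives the first clause (`allgemeines`) and the final document score, using only constants printed in the tree:

```python
import math

# Lucene ClassicSimilarity building blocks:
#   tf(t in d)   = sqrt(termFreq)
#   idf(t)       = 1 + ln(maxDocs / (docFreq + 1))
#   queryWeight  = idf * queryNorm
#   fieldWeight  = tf * idf * fieldNorm
#   clause score = queryWeight * fieldWeight

def idf(doc_freq, max_docs):
    return 1.0 + math.log(max_docs / (doc_freq + 1))

query_norm = 0.021569785   # queryNorm, shared by every clause
field_norm = 0.0390625     # fieldNorm(doc=504): field-length normalization

# First clause: _text_:allgemeines with freq=4, docFreq=399, maxDocs=44218
idf_t = idf(399, 44218)                             # ~5.705423
query_weight = idf_t * query_norm                   # ~0.12306474
field_weight = math.sqrt(4.0) * idf_t * field_norm  # ~0.44573617
clause = query_weight * field_weight                # ~0.054854408

# Final score: sum of the 8 matching clause scores, scaled by
# coord(8/30) because only 8 of the 30 query clauses matched doc 504.
clauses = [0.054854408, 0.017560039, 0.0038187557, 0.017560039,
           0.05837661, 0.05837661, 0.03758053, 0.0014085418]
doc_score = sum(clauses) * (8 / 30)                 # ~0.06654282
print(round(doc_score, 8))
```

Note how idf dominates: the rare terms `einzelne` (docFreq=333) and `allgemeines` (docFreq=399) contribute far more than the ubiquitous `und` (docFreq=13101) despite `und` occurring 18 times.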
    
    Abstract
    This onomasiological study compares, across languages, lexical designations for mental activities (e.g. denken 'think'), processes (verstehen 'understand'), and states (wissen 'know'). The focus is on commonalities and differences in how these domains are opened up linguistically. The German vocabulary alone exhibits varied degrees of motivation: from relatively transparent expressions (begreifen, erwägen), through expressions analyzable as complex but not genuinely transparent (überlegen, verstehen), to opaque simplex words (denken, wissen). The guiding questions are: which images recur across different languages and cultures? Are there particular channels along which this lexical development runs? The domain proves highly heterogeneous, with numerous naming patterns, yet three patterns - designations from the source domains GRASPING/TAKING, SEEING, and HEARING - crystallize as widespread across various subdomains and different languages.
    BK
    17.30 Psycholinguistik: Allgemeines
    18.00 Einzelne Sprachen und Literaturen allgemein
    Classification
    GC 9403: Wortfeldtheorie (Wortfeldforschung) / Germanistik. Niederlandistik. Skandinavistik / Deutsche Sprache
    17.30 Psycholinguistik: Allgemeines
    18.00 Einzelne Sprachen und Literaturen allgemein
    Imprint
    Münster : Verl-Haus Monsenstein und Vannerdat
    Pages
    524 S
    RVK
    GC 9403: Wortfeldtheorie (Wortfeldforschung) / Germanistik. Niederlandistik. Skandinavistik / Deutsche Sprache
  2. Schmitz-Esser, W.: EXPO-INFO 2000 : Visuelles Besucherinformationssystem für Weltausstellungen (2000) 0.03
    0.03376455 = product of:
      0.1447052 = sum of:
        0.03642706 = weight(_text_:buch in 1404) [ClassicSimilarity], result of:
          0.03642706 = score(doc=1404,freq=4.0), product of:
            0.10028592 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.021569785 = queryNorm
            0.36323205 = fieldWeight in 1404, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1404)
        0.02110454 = weight(_text_:und in 1404) [ClassicSimilarity], result of:
          0.02110454 = score(doc=1404,freq=26.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.441457 = fieldWeight in 1404, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1404)
        0.024179846 = weight(_text_:informationswissenschaft in 1404) [ClassicSimilarity], result of:
          0.024179846 = score(doc=1404,freq=2.0), product of:
            0.09716552 = queryWeight, product of:
              4.504705 = idf(docFreq=1328, maxDocs=44218)
              0.021569785 = queryNorm
            0.24885213 = fieldWeight in 1404, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.504705 = idf(docFreq=1328, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1404)
        0.006614278 = weight(_text_:in in 1404) [ClassicSimilarity], result of:
          0.006614278 = score(doc=1404,freq=18.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.22543246 = fieldWeight in 1404, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1404)
        0.033282977 = weight(_text_:einzelnen in 1404) [ClassicSimilarity], result of:
          0.033282977 = score(doc=1404,freq=2.0), product of:
            0.1139978 = queryWeight, product of:
              5.285069 = idf(docFreq=608, maxDocs=44218)
              0.021569785 = queryNorm
            0.29196155 = fieldWeight in 1404, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.285069 = idf(docFreq=608, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1404)
        0.02110454 = weight(_text_:und in 1404) [ClassicSimilarity], result of:
          0.02110454 = score(doc=1404,freq=26.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.441457 = fieldWeight in 1404, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1404)
        0.001991979 = weight(_text_:s in 1404) [ClassicSimilarity], result of:
          0.001991979 = score(doc=1404,freq=4.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.08494043 = fieldWeight in 1404, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1404)
      0.23333333 = coord(7/30)
    
    Abstract
    The current knowledge of the world as mirrored in a world exposition: how do you present it, and how do you make it accessible to interested visitors - in the exhibition itself, in publications, on the air, and over the Internet? What one can see and experience at a world exposition on the threshold of the third millennium exceeds, in sheer volume and variety, anything an individual can take in. Schmitz-Esser shows in his book how visitors can experience the world exposition in any of four languages and take its quintessence home with them. This is made possible by the concept of virtual "knowledge in a capsule", prepared so that it can be deployed in all common media forms and for the most diverse modes of appropriation. The solution is not merely a matter of informatics and information technology; it is above all a challenge for information science and computational linguistics. The book presents the goal, the approach, the components, and the prerequisites.
    Content
    A welcome stimulus right at the entrance.- Deepening one's knowledge during the exhibition.- Everything for the visitor's well-being.- The system structure and its individual elements.- The point of departure.- Structuring the material as topics and subtopics.- The nutshells.- The proxy text.- The thesaurus.- Journeys through conceptual space.- And back to the real world.- Follow-on products.- The EXPO information system at a glance.- Index.- Bibliography.
    Footnote
    Rez.in: KO 29(2002) no.2, S.103-104 (G.J.A. Riesthuis)
    Pages
    XI,119 S
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
    Konzeption und Anwendung des Prinzips Thesaurus
  3. Seiler, T.B.: Begreifen und Verstehen (2001) 0.02
    0.02406431 = product of:
      0.12032155 = sum of:
        0.03642706 = weight(_text_:buch in 6893) [ClassicSimilarity], result of:
          0.03642706 = score(doc=6893,freq=4.0), product of:
            0.10028592 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.021569785 = queryNorm
            0.36323205 = fieldWeight in 6893, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6893)
        0.021901216 = weight(_text_:und in 6893) [ClassicSimilarity], result of:
          0.021901216 = score(doc=6893,freq=28.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.45812157 = fieldWeight in 6893, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6893)
        0.0054005357 = weight(_text_:in in 6893) [ClassicSimilarity], result of:
          0.0054005357 = score(doc=6893,freq=12.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.18406484 = fieldWeight in 6893, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6893)
        0.033282977 = weight(_text_:einzelnen in 6893) [ClassicSimilarity], result of:
          0.033282977 = score(doc=6893,freq=2.0), product of:
            0.1139978 = queryWeight, product of:
              5.285069 = idf(docFreq=608, maxDocs=44218)
              0.021569785 = queryNorm
            0.29196155 = fieldWeight in 6893, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.285069 = idf(docFreq=608, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6893)
        0.021901216 = weight(_text_:und in 6893) [ClassicSimilarity], result of:
          0.021901216 = score(doc=6893,freq=28.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.45812157 = fieldWeight in 6893, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6893)
        0.0014085418 = weight(_text_:s in 6893) [ClassicSimilarity], result of:
          0.0014085418 = score(doc=6893,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.060061958 = fieldWeight in 6893, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6893)
      0.2 = coord(6/30)
    
    Abstract
    Knowledge matters. Today it is business enterprises in particular that have recognized they cannot do without the knowledge and education of their employees. Knowledge takes its place alongside labour and capital as an equal; together they form the foundation of modern industrial companies. But what, exactly, is knowledge? How is knowledge acquired and passed on? These questions have received many very different answers. Seemingly self-evident processes such as understanding and recognizing in fact touch the foundations of our thinking, and how thinking actually proceeds has, despite all the explanatory attempts of the biochemists, not been answered satisfactorily. The psychologist Thomas Bernhard Seiler accordingly leaves the biological models aside in his book "Begreifen und Verstehen". He starts from the premise that understanding is the process of cognition, but that 'cognition' decomposes into a multitude of individual processes. The pieces and units of which the cognitive process consists Seiler calls "concepts" (Begriffe). Knowledge, on this account, consists of concepts. "Concept" is his central concept, and that very sentence shows how difficult the terrain is on which Seiler moves, for the explanation of words like "concept" often contains the word to be explained. He masters this task in admirably clear and comprehensible language, though his book is by no means easy reading - concentrated thinking-along is required as Seiler leads his readers from manageable initial definitions to the sign character of language and on to the concept theories of philosophy and psychology. This is not popular science, but it is science for readers with a solid school education. For all the theory, Seiler repeatedly puts the human being at the centre and makes clear that humans are not programmable like a computer. Concept formation, that is, the acquisition of knowledge, is in truth highly complex and very individual.
    Pages
    244 S
  4. Fugmann, R.: ¬Die Nützlichkeit von semantischen Kategorien auf dem Gebiet der Informationsbereitstellung (2006) 0.02
    0.023918957 = product of:
      0.10250981 = sum of:
        0.014048031 = weight(_text_:und in 5867) [ClassicSimilarity], result of:
          0.014048031 = score(doc=5867,freq=18.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.29385152 = fieldWeight in 5867, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=5867)
        0.013394055 = product of:
          0.02678811 = sum of:
            0.02678811 = weight(_text_:bibliothekswesen in 5867) [ClassicSimilarity], result of:
              0.02678811 = score(doc=5867,freq=4.0), product of:
                0.09615103 = queryWeight, product of:
                  4.457672 = idf(docFreq=1392, maxDocs=44218)
                  0.021569785 = queryNorm
                0.2786045 = fieldWeight in 5867, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.457672 = idf(docFreq=1392, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5867)
          0.5 = coord(1/2)
        0.02678811 = weight(_text_:bibliothekswesen in 5867) [ClassicSimilarity], result of:
          0.02678811 = score(doc=5867,freq=4.0), product of:
            0.09615103 = queryWeight, product of:
              4.457672 = idf(docFreq=1392, maxDocs=44218)
              0.021569785 = queryNorm
            0.2786045 = fieldWeight in 5867, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.457672 = idf(docFreq=1392, maxDocs=44218)
              0.03125 = fieldNorm(doc=5867)
        0.02678811 = weight(_text_:bibliothekswesen in 5867) [ClassicSimilarity], result of:
          0.02678811 = score(doc=5867,freq=4.0), product of:
            0.09615103 = queryWeight, product of:
              4.457672 = idf(docFreq=1392, maxDocs=44218)
              0.021569785 = queryNorm
            0.2786045 = fieldWeight in 5867, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.457672 = idf(docFreq=1392, maxDocs=44218)
              0.03125 = fieldNorm(doc=5867)
        0.005849888 = weight(_text_:in in 5867) [ClassicSimilarity], result of:
          0.005849888 = score(doc=5867,freq=22.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.19937998 = fieldWeight in 5867, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=5867)
        0.014048031 = weight(_text_:und in 5867) [ClassicSimilarity], result of:
          0.014048031 = score(doc=5867,freq=18.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.29385152 = fieldWeight in 5867, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=5867)
        0.0015935833 = weight(_text_:s in 5867) [ClassicSimilarity], result of:
          0.0015935833 = score(doc=5867,freq=4.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.06795235 = fieldWeight in 5867, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.03125 = fieldNorm(doc=5867)
      0.23333333 = coord(7/30)
    
    Abstract
    Among the various ways of ordering human knowledge so as to keep an overview of it, categories have played an important role since antiquity. In the field of librarianship and information provision specifically, Ranganathan (1967) broke new ground as early as the 1930s by introducing his "Fundamental Categories" (Personality, Matter, Energy, Space, Time), though without managing to provoke much resonance in the professional community. In traditional librarianship, the transition to a category-based information system of this kind would probably have meant too great an upheaval, and at the many fresh starts of the more recent past this order-creating tool was passed over out of ignorance or aversion. Yet it is precisely with computer technology available that great advances can be achieved when indexing languages rest on a set of semantic categories. Developing such languages to high efficiency has in fact never succeeded without a foundation of semantic categories, and is not even conceivable without one. Semantic categories are types of concepts that play a particularly large role in the publications and queries of the field in question and therefore demand particular attention. Complementing Bauer's (2004) remarks on the uses of categories in knowledge organization in general, seven applications of a concept of semantic categories are briefly discussed below. In a large-scale system for chemical patents they have produced an information system whose performance remains unsurpassed to this day. The original Ranganathan category concept for librarianship was adapted there to the needs of pure and applied chemistry (cf. Fugmann 1999, S. 23, 49-64). It comprises: SUBSTANCE, ORGANISM, DEVICE, PROCESS, and ATTRIBUTE of the foregoing categorial objects, i.e. property and use of substance, organism, device, process. Seven such applications in a system for targeted information provision are listed below: 1. The definition of subject headings or descriptors 2. The content of the indexing language 3. The ordering of the indexing language's vocabulary 4. The subdivision criterion 5. Concept analysis 6. Concept synthesis 7. The avoidance of unsatisfiable search conditions
    Pages
    S.34-36
    Series
    Fortschritte in der Wissensorganisation; Bd.9
    Source
    Wissensorganisation und Verantwortung: Gesellschaftliche, ökonomische und technische Aspekte. Proceedings der 9. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation Duisburg, 5.-7. November 2004. Hrsg. von H.P. Ohly u.a
  5. Rahmstorf, G.: Wortmodell und Begriffssprache als Basis des semantischen Retrievals (2000) 0.01
    0.014930005 = product of:
      0.08958003 = sum of:
        0.018323874 = weight(_text_:und in 5484) [ClassicSimilarity], result of:
          0.018323874 = score(doc=5484,freq=10.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.38329202 = fieldWeight in 5484, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5484)
        0.047873653 = weight(_text_:informationswissenschaft in 5484) [ClassicSimilarity], result of:
          0.047873653 = score(doc=5484,freq=4.0), product of:
            0.09716552 = queryWeight, product of:
              4.504705 = idf(docFreq=1328, maxDocs=44218)
              0.021569785 = queryNorm
            0.4927021 = fieldWeight in 5484, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.504705 = idf(docFreq=1328, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5484)
        0.0030866629 = weight(_text_:in in 5484) [ClassicSimilarity], result of:
          0.0030866629 = score(doc=5484,freq=2.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.10520181 = fieldWeight in 5484, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5484)
        0.018323874 = weight(_text_:und in 5484) [ClassicSimilarity], result of:
          0.018323874 = score(doc=5484,freq=10.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.38329202 = fieldWeight in 5484, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5484)
        0.0019719584 = weight(_text_:s in 5484) [ClassicSimilarity], result of:
          0.0019719584 = score(doc=5484,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.08408674 = fieldWeight in 5484, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5484)
      0.16666667 = coord(5/30)
    
    Abstract
    Today's retrieval technology is contrasted with the project of a semantically based search system, intended to work more precisely and more completely and to support systematic connections between topics. This approach requires a comprehensive dictionary with a simple conceptual representation of word meanings. The word model covers the word, word features, lemma, word senses (readings), reading features, and concepts. Concepts are formal expressions of a concept language. Corresponding to this differentiation, lemma indexing, reading indexing, and concept indexing are distinguished. Concepts are constructed and recorded graphically with the program Concepto.
    Pages
    S.71-88
    Series
    Schriften zur Informationswissenschaft; Bd.38
    Source
    Informationskompetenz - Basiskompetenz in der Informationsgesellschaft: Proceedings des 7. Internationalen Symposiums für Informationswissenschaft (ISI 2000), Hrsg.: G. Knorz u. R. Kuhlen
  6. Bauer, G.: ¬Die vielseitigen Anwendungsmöglichkeiten des Kategorienprinzips bei der Wissensorganisation (2006) 0.01
    0.0086558 = product of:
      0.051934797 = sum of:
        0.018323874 = weight(_text_:und in 5710) [ClassicSimilarity], result of:
          0.018323874 = score(doc=5710,freq=10.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.38329202 = fieldWeight in 5710, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5710)
        0.0030866629 = weight(_text_:in in 5710) [ClassicSimilarity], result of:
          0.0030866629 = score(doc=5710,freq=2.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.10520181 = fieldWeight in 5710, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5710)
        0.018323874 = weight(_text_:und in 5710) [ClassicSimilarity], result of:
          0.018323874 = score(doc=5710,freq=10.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.38329202 = fieldWeight in 5710, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5710)
        0.0019719584 = weight(_text_:s in 5710) [ClassicSimilarity], result of:
          0.0019719584 = score(doc=5710,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.08408674 = fieldWeight in 5710, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5710)
        0.010228428 = product of:
          0.020456856 = sum of:
            0.020456856 = weight(_text_:22 in 5710) [ClassicSimilarity], result of:
              0.020456856 = score(doc=5710,freq=2.0), product of:
                0.07553371 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021569785 = queryNorm
                0.2708308 = fieldWeight in 5710, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5710)
          0.5 = coord(1/2)
      0.16666667 = coord(5/30)
    
    Abstract
    Almost all the famous philosophers of the past centuries have dealt with the most general concepts and have formulated and interpreted them very differently. A selection of philosophical categories: - Plato (427-347): being, identity, difference, change, persistence - Aristotle (384-322): substance, quality, quantity, relation, place, having, doing, being affected - Kant (1724-1804): quality, quantity, relation, modality - Lotze (1891): thing, property, activity, relation. Categories are understood as the most general stem concepts of the understanding, under which all objects of experience fall and from which the remaining concepts can be derived. For information practice, the original philosophical categories have been modified.
    Pages
    S.22-33
    Series
    Fortschritte in der Wissensorganisation; Bd.9
    Source
    Wissensorganisation und Verantwortung: Gesellschaftliche, ökonomische und technische Aspekte. Proceedings der 9. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation Duisburg, 5.-7. November 2004. Hrsg. von H.P. Ohly u.a
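  The relevance figure attached to each hit (0.07, 0.01, ...) is a Lucene ClassicSimilarity score. As an illustration, here is a minimal sketch of how a single term's contribution is computed, using idf/queryNorm/fieldNorm figures reported in this result set's explain data and assuming the standard ClassicSimilarity formulas (tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm):

```python
import math

def classic_similarity_term_score(freq, idf, query_norm, field_norm):
    """Single-term score in Lucene's ClassicSimilarity (TF-IDF):
    score = queryWeight * fieldWeight
          = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
    """
    tf = math.sqrt(freq)              # tf(freq=2.0) = 1.4142135...
    query_weight = idf * query_norm   # e.g. 1.0872376 * 0.021569785 ~ 0.0234515
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

# Figures taken from this result set's explain data for one term:
score = classic_similarity_term_score(
    freq=2.0, idf=1.0872376, query_norm=0.021569785, field_norm=0.0546875)
print(f"{score:.6f}")  # ~ 0.001972, matching the reported 0.0019719584
```

  The per-entry figure is then the sum of such term contributions, multiplied by a coordination factor coord(m/n), i.e. the fraction of query clauses that matched the document.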
  7. Stock, W.: Begriffe und semantische Relationen in der Wissensrepräsentation (2009) 0.01
    0.006520972 = product of:
      0.048907287 = sum of:
        0.019866917 = weight(_text_:und in 3218) [ClassicSimilarity], result of:
          0.019866917 = score(doc=3218,freq=16.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.41556883 = fieldWeight in 3218, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=3218)
        0.007483202 = weight(_text_:in in 3218) [ClassicSimilarity], result of:
          0.007483202 = score(doc=3218,freq=16.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.25504774 = fieldWeight in 3218, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3218)
        0.019866917 = weight(_text_:und in 3218) [ClassicSimilarity], result of:
          0.019866917 = score(doc=3218,freq=16.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.41556883 = fieldWeight in 3218, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=3218)
        0.0016902501 = weight(_text_:s in 3218) [ClassicSimilarity], result of:
          0.0016902501 = score(doc=3218,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.072074346 = fieldWeight in 3218, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=3218)
      0.13333334 = coord(4/30)
    
    Abstract
    Concept-oriented information retrieval requires an information-science theory of concepts and of semantic relations. A concept is determined by its intension and extension as well as by definitions. We counter the problem of vagueness by introducing prototypes. Important kinds of definition are the classical definition (following Aristotle) and definition via family resemblances (in Wittgenstein's sense). We model concepts as frames (in Barsalou's version). The central paradigmatic relation in knowledge organization systems is hierarchy, which must be divided into several kinds: hyponymy splits into taxonomy and simple hyponymy, meronymy into a whole series of distinct part-whole relations. The transitivity of the respective relation is important for practical applications. An unspecific associative relation is of little help for the intended applications and is replaced by a bundle of generalizable and domain-specific relations. Our approach grounds new options for applying knowledge organization systems in information practice beyond their "classical" use in information retrieval: query expansion (applying semantic closeness), automatic inference (applying terminological logic in preparation for a Semantic Web), and automatic computation (for functional concepts with numerical values).
    Source
    Information - Wissenschaft und Praxis. 60(2009) H.8, S.403-420
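  Stock's point about the transitivity of hierarchical relations is what makes downward query expansion computable: if the hyponymy relation is transitive, a query term can be expanded to all of its direct and indirect narrower terms. A minimal sketch, with toy data (the hyponymy edges below are illustrative, not taken from the article):

```python
# Hypothetical mini knowledge organization system: direct hyponymy (is-a) edges,
# broader term -> set of narrower terms.
HYPONYMS = {
    "Tier": {"Wirbeltier"},
    "Wirbeltier": {"Säugetier", "Reptil"},
    "Säugetier": {"Hund", "Katze"},
}

def expand_query(term, hyponyms):
    """Collect all transitive hyponyms of `term` (downward query expansion)."""
    result, stack = set(), [term]
    while stack:
        current = stack.pop()
        for narrower in hyponyms.get(current, ()):
            if narrower not in result:
                result.add(narrower)
                stack.append(narrower)
    return result

print(sorted(expand_query("Tier", HYPONYMS)))
# ['Hund', 'Katze', 'Reptil', 'Säugetier', 'Wirbeltier']
```

  An unspecific associative relation, by contrast, is not transitive, which is one reason the article replaces it with a bundle of specific relations whose transitivity can be assessed individually.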
  8. Gerstenkorn, A.: Informationsbezug zwischen Gemein- und Fachsprache : Zum gemein- und fachsprachlichen Wort "Tal" (2006) 0.01
    
    Abstract
    Documentation languages employ terminologies and ontologies. Terminologies contain many expressions that are also used in everyday language (general language). Are these expressions that are merely explicated with differing degrees of precision, or are they polysemes? The word "Tal" (valley) serves as an example for this question and also as an example of how to build ontologies for documentation purposes.
    Source
    Information - Wissenschaft und Praxis. 57(2006) H.5, S.259-267
  9. Dahlberg, I.: Zur Begriffskultur in den Sozialwissenschaften : Evaluation einer Herausforderung (2006) 0.01
    
    Abstract
    Following a lecture on concept and definition theory that I had given at the 1986 Darmstadt conference on concept analysis (Dahlberg 1987), Dr. Rainer Greshoff, then co-editor of the journal Ethik und Sozialwissenschaften, approached me in 1990 with the request to write a similar contribution as a lead article for his journal. I agreed, intending also to bring in the experience with social-science terminology I had gained at COCTA, the Committee for Conceptual and Terminological Analysis (chaired by Prof. Dr. Fred Riggs, Hawaii) (Riggs 1982). In addition, just at that time the book by Stefan Andreski had come into my hands, titled "Die Hexenmeister der Sozialwissenschaften. Missbrauch, Mode und Manipulation einer Wissenschaft" (Andreski 1974), which, so to speak, does not mince words and quite boldly, with many examples, lays bare the misery of social-science terminology. I therefore hoped that such a contribution would raise awareness of a concept-oriented, systematic terminology for the social sciences. In a certain way the approach of Prof. Riggs, with his "onomantics" (Riggs 1985), served me as a model. He started from the premise that the so-called semasiological approach, which asks for the meaning of a word, is useless for understanding it (and not only in the social sciences); one must instead proceed onomasiologically: first become clear about the concept associated with a word (or a word in a context), find its possible definition, and only then seek a designation for it. For lack of time my contribution did not appear until 1995. Dr. Greshoff, following his journal's method, was able to find a sizeable number of critics for my contribution, have their critiques answered in a reply by the author, and close the whole with a meta-critique by someone not involved. In my case these were 27 individuals from 10 different disciplines, with Prof. Dr. Endruweit as meta-critic. The contribution covered pages 3-91 (DIN A4 format) of issue 1-1996 under the title "Zur Begriffskultur in den Sozialwissenschaften. Lassen sich ihre Probleme lösen?" (Dahlberg 1996). I was convinced that their problems could be solved with my proposed method. My critics, unfortunately, were not. And the why of that is what my lecture today is about.
    Pages
    S.2-11
    Series
    Fortschritte in der Wissensorganisation; Bd.9
    Source
    Wissensorganisation und Verantwortung: Gesellschaftliche, ökonomische und technische Aspekte. Proceedings der 9. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation Duisburg, 5.-7. November 2004. Hrsg. von H.P. Ohly u.a
  10. Budin, G.: Begriffliche Wissensorganisation in den Sozialwissenschaften : Theorien- und Methodenvielfalt (2006) 0.01
    
    Abstract
    In this contribution I take up a topic that has been discussed for many years. The organization of social-science knowledge at the conceptual level is a subject both for library and information science and documentation (Soergel 1971) and for the theoretical debate within the social sciences themselves. The perspective of terminological (that is, conceptual) knowledge organization has the advantage of allowing us to connect the topic well on both the theoretical and the practical level.
    Pages
    S.12-21
    Series
    Fortschritte in der Wissensorganisation; Bd.9
    Source
    Wissensorganisation und Verantwortung: Gesellschaftliche, ökonomische und technische Aspekte. Proceedings der 9. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation Duisburg, 5.-7. November 2004. Hrsg. von H.P. Ohly u.a
  11. Rahmstorf, G.: Wege zur Ontologie (2006) 0.00
    
    Abstract
    In philosophy, ontology is the inquiry into which kinds of objects exist and in what manner they exist. A central question is, for example, whether mental objects such as thoughts, ideas, concepts, and representations exist and how their existence is justified. More recently the word "ontology" has acquired a different, somewhat more concrete meaning: an ontology is understood as the most general concepts of a subject field. The ontology of the organic thus includes, among others, "plant", "animal", and "human being". If the concept "animal" is given, other concepts such as "vertebrate", "reptile", "mammal", and so on can be derived from it by specification, i.e. by stating further characteristics. Concept relations and other formal devices may also belong to ontologies. Ontology is then the method by which the conceptual world of a subject field is determined, and in particular the result of that method: the structure of concepts of highest generality produced with it. The word "ontology" can, however, be understood differently. It also denotes the totality of all concepts belonging to the object of investigation; the ontology then comprises every concept considered or worked out for a given task, so that an ontology of electronics, for instance, would include all the concepts of that field. With such a broad understanding of the word "ontology", the focus on the categories of greatest generality in the hierarchically structured conceptual world is abandoned. Moreover, an ontology so understood would be burdened with the fact that the vocabulary cannot be delimited. In German, compounds can be extended almost without limit: there is not only the Kamin (chimney) and the Kaminfeger (chimney sweep) but also the Kaminfegerarbeitskleidungsreinigungsfirma (the firm that cleans the chimney sweep's work clothes), and so on. Compounds are indispensable, and their maximum length is not fixed. The names of substances in chemistry show that very long compounds are used in that technical language without any communication problem. But ontologists will not be interested in extending their investigations into the depths of every subject field. In short, "ontology" should refer to concepts of highest generality, corresponding to the narrower meaning of the word. The subordinate concepts of these top-level concepts may of course also interest ontologists, because they show what concept-formation potential the various top-level concepts have. The focus of ontologists' interest, however, should be on those concepts that cannot be traced back any further.
    Pages
    S.37-47
    Series
    Fortschritte in der Wissensorganisation; Bd.9
    Source
    Wissensorganisation und Verantwortung: Gesellschaftliche, ökonomische und technische Aspekte. Proceedings der 9. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation Duisburg, 5.-7. November 2004. Hrsg. von H.P. Ohly u.a
  12. Working with conceptual structures : contributions to ICCS 2000. 8th International Conference on Conceptual Structures: Logical, Linguistic, and Computational Issues. Darmstadt, August 14-18, 2000 (2000) 0.00
    
    Abstract
    The 8th International Conference on Conceptual Structures - Logical, Linguistic, and Computational Issues (ICCS 2000) brings together a wide range of researchers and practitioners working with conceptual structures. During the last few years, the ICCS conference series has considerably widened its scope on different kinds of conceptual structures, stimulating research across domain boundaries. We hope that this stimulation is further enhanced by ICCS 2000 joining the long tradition of conferences in Darmstadt with extensive, lively discussions. This volume consists of contributions presented at ICCS 2000, complementing the volume "Conceptual Structures: Logical, Linguistic, and Computational Issues" (B. Ganter, G.W. Mineau (Eds.), LNAI 1867, Springer, Berlin-Heidelberg 2000). It contains submissions reviewed by the program committee, and position papers. We wish to express our appreciation to all the authors of submitted papers, to the general chair, the program chair, the editorial board, the program committee, and to the additional reviewers for making ICCS 2000 a valuable contribution in the knowledge processing research field. Special thanks go to the local organizers for making the conference an enjoyable and inspiring event. We are grateful to Darmstadt University of Technology, the Ernst Schröder Center for Conceptual Knowledge Processing, the Center for Interdisciplinary Studies in Technology, the Deutsche Forschungsgemeinschaft, Land Hessen, and NaviCon GmbH for their generous support
    Content
    Concepts & Language: Knowledge organization by procedures of natural language processing. A case study using the method GABEK (J. Zelger, J. Gadner) - Computer aided narrative analysis using conceptual graphs (H. Schärfe, P. Øhrstrøm) - Pragmatic representation of argumentative text: a challenge for the conceptual graph approach (H. Irandoust, B. Moulin) - Conceptual graphs as a knowledge representation core in a complex language learning environment (G. Angelova, A. Nenkova, S. Boycheva, T. Nikolov) - Conceptual Modeling and Ontologies: Relationships and actions in conceptual categories (Ch. Landauer, K.L. Bellman) - Concept approximations for formal concept analysis (J. Saquer, J.S. Deogun) - Faceted information representation (U. Priß) - Simple concept graphs with universal quantifiers (J. Tappe) - A framework for comparing methods for using or reusing multiple ontologies in an application (J. van Zyl, D. Corbett) - Designing task/method knowledge-based systems with conceptual graphs (M. Leclère, F. Trichet, Ch. Choquet) - A logical ontology (J. Farkas, J. Sarbo) - Algorithms and Tools: Fast concept analysis (Ch. Lindig) - A framework for conceptual graph unification (D. Corbett) - Visual CP representation of knowledge (H.D. Pfeiffer, R.T. Hartley) - Maximal isojoin for representing software textual specifications and detecting semantic anomalies (Th. Charnois) - Troika: using grids, lattices and graphs in knowledge acquisition (H.S. Delugach, B.E. Lampkin) - Open world theorem prover for conceptual graphs (J.E. Heaton, P. Kocura) - NetCare: a practical conceptual graphs software tool (S. Polovina, D. Strang) - CGWorld - a web based workbench for conceptual graphs management and applications (P. Dobrev, K. Toutanova) - Position papers: The edition project: Peirce's existential graphs (R. Müller) - Mining association rules using formal concept analysis (N. Pasquier) - Contextual logic summary (R. Wille) - Information channels and conceptual scaling (K.E. Wolff) - Spatial concepts - a rule exploration (S. Rudolph) - The TEXT-TO-ONTO learning environment (A. Mädche, St. Staab) - Controlling the semantics of metadata on audio-visual documents using ontologies (Th. Dechilly, B. Bachimont) - Building the ontological foundations of a terminology from natural language to conceptual graphs with Ribosome, a knowledge extraction system (Ch. Jacquelinet, A. Burgun) - CharGer: some lessons learned and new directions (H.S. Delugach) - Knowledge management using conceptual graphs (W.K. Pun)
    Pages
    315 S
    Type
    s
  13. Hjoerland, B.: Concept theory (2009) 0.00
    
    Abstract
    Concept theory is an extremely broad, interdisciplinary and complex field of research, related to many deep fields with very long historical traditions and little consensus. However, information science and knowledge organization cannot avoid relating to theories of concepts. Knowledge organization systems (e.g., classification systems, thesauri, and ontologies) should be understood as systems that basically organize concepts and their semantic relations; the same is the case with information retrieval systems. Different theories of concepts have different implications for how to construe, evaluate, and use such systems. Based on a post-Kuhnian view of paradigms, this article puts forward arguments that theories of concepts are best understood and classified in accordance with epistemological theories (empiricism, rationalism, historicism, and pragmatism). It is also argued that the historicist and pragmatist understandings of concepts are the most fruitful views, and that this understanding may be part of a broader paradigm shift that is also beginning to take place in information science. The importance of historicist and pragmatic theories of concepts for information science is outlined.
    Footnote
    Cf.: Szostak, R.: Comment on Hjørland's concept theory, in: Journal of the American Society for Information Science and Technology. 61(2010) no.5, S.1076-1077, and the reply to it by B. Hjørland (S.1078-1080)
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.8, S.1519-1536
  14. Olson, H.A.: How we construct subjects : a feminist analysis (2007) 0.00
    
    Abstract
    To organize information, librarians create structures. These structures grow from a logic that goes back at least as far as Aristotle. It is the basis of classification as we practice it, and thesauri and subject headings have developed from it. Feminist critiques of logic suggest that logic is gendered in nature. This article will explore how these critiques play out in contemporary standards for the organization of information. Our widely used classification schemes embody principles such as hierarchical force that conform to traditional/Aristotelian logic. Our subject heading strings follow a linear path of subdivision. Our thesauri break down subjects into discrete concepts. In thesauri and subject heading lists we privilege hierarchical relationships, reflected in the syndetic structure of broader and narrower terms, over all other relationships. Are our classificatory and syndetic structures gendered? Are there other options? Carol Gilligan's In a Different Voice (1982), Women's Ways of Knowing (Belenky, Clinchy, Goldberger, & Tarule, 1986), and more recent related research suggest a different type of structure for women's knowledge grounded in "connected knowing." This article explores current and potential elements of connected knowing in subject access with a focus on the relationships, both paradigmatic and syntagmatic, between concepts.
    Content
    Contribution to a special issue on 'Gender Issues in Information Needs and Services'.
    Date
    11.12.2019 19:00:22
    Source
    Library trends. 56(2007) no.2, S.509-541
  15. Jouis, C.: Logic of relationships (2002) 0.00
    Abstract
    A main goal of recent studies in semantics is to integrate into conceptual structures the models of representation used in linguistics, logic, and/or artificial intelligence. A fundamental problem resides in the need to structure knowledge and then to check the validity of the constructed representations. We propose associating logical properties with relationships by introducing the relationships into a typed and functional system of specifications. This makes it possible to compare conceptual representations against the relationships established between the concepts. The mandatory condition for validating such a conceptual representation is consistency. The proposed semantic system is based on a structured set of semantic primitives (types, relations, and properties) grounded in a global model of language processing, Applicative and Cognitive Grammar (ACG) (Desclés, 1990), and an extension of this model to terminology (Jouis & Mustafa 1995, 1996, 1997). The ACG postulates three levels of representation of languages, including a cognitive level. At this level, the meanings of lexical predicates are represented by semantic cognitive schemes. From this perspective, we propose a set of semantic concepts, which defines an organized system of meanings. Relations are part of a specification network based on a general terminological scheme (i.e., a coherent system of meanings of relations). In such a system, a specific relation may be characterized by: (1) its functional type (the semantic type of the arguments of the relation); (2) its algebraic properties (reflexivity, symmetry, transitivity, etc.); and (3) its combinatorial relations with other entities in the same context (for instance, the part of the text where a concept is defined).
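    The typed relations with declared algebraic properties that the abstract describes can be pictured with a minimal sketch. This is an illustration of the general idea only, not Jouis's actual specification system; all names and the consistency check are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Relation:
    name: str
    domain: str          # functional type expected of the first argument
    range: str           # functional type expected of the second argument
    symmetric: bool = False
    transitive: bool = False

def closure(rel, pairs):
    """Expand a set of (a, b) assertions with the pairs implied by the
    relation's declared symmetry and transitivity."""
    pairs = set(pairs)
    changed = True
    while changed:
        changed = False
        if rel.symmetric:
            for a, b in list(pairs):
                if (b, a) not in pairs:
                    pairs.add((b, a)); changed = True
        if rel.transitive:
            for a, b in list(pairs):
                for c, d in list(pairs):
                    if b == c and (a, d) not in pairs:
                        pairs.add((a, d)); changed = True
    return pairs

def consistent(rel, pairs, types):
    """A representation is accepted only if every argument carries the
    functional type the relation demands."""
    return all(types.get(a) == rel.domain and types.get(b) == rel.range
               for a, b in pairs)
```

    For a transitive `part_of` relation, `closure` derives `("wheel", "fleet")` from `("wheel", "bike")` and `("bike", "fleet")`, and `consistent` then plays the role of the validity check the abstract calls mandatory.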
    Date
    1.12.2002 11:12:22
    Pages
    S.127-140
  16. Conceptual structures : logical, linguistic, and computational issues. 8th International Conference on Conceptual Structures, ICCS 2000, Darmstadt, Germany, August 14-18, 2000 (2000) 0.00
    Abstract
    Computer scientists create models of a perceived reality. Through AI techniques, these models aim at providing the basic support for emulating cognitive behavior such as reasoning and learning, which is one of the main goals of the AI research effort. Such computer models are formed through the interaction of various acquisition and inference mechanisms: perception, concept learning, conceptual clustering, hypothesis testing, probabilistic inference, etc., and are represented using different paradigms tightly linked to the processes that use them. Among these paradigms let us cite: biological models (neural nets, genetic programming), logic-based models (first-order logic, modal logic, rule-based systems), virtual reality models (object systems, agent systems), probabilistic models (Bayesian nets, fuzzy logic), linguistic models (conceptual dependency graphs, language-based representations), etc. One of the strengths of Conceptual Graph (CG) theory is its versatility in terms of the representation paradigms under which it falls: it can be viewed, and therefore used, under different representation paradigms, which makes it a popular choice for a wealth of applications. Its full coupling with different cognitive processes has led to the opening of the field toward related research communities such as the Description Logic, Formal Concept Analysis, and Computational Linguistics communities. We now see more and more research results from one community enrich the other, laying the foundations of common philosophical grounds from which a successful synergy can emerge. ICCS 2000 embodies this spirit of research collaboration. It presents a set of papers that, we believe, will by their exposure benefit the whole community.
For instance, the technical program proposes tracks on Conceptual Ontologies, Language, Formal Concept Analysis, Computational Aspects of Conceptual Structures, and Formal Semantics, with some papers on pragmatism and human-related aspects of computing. Never before has the program of ICCS been formed by so heterogeneously rooted a set of theories of knowledge representation and use. We hope that this swirl of ideas will benefit you as much as it benefited us while putting this program together.
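    Formal Concept Analysis, one of the neighboring communities named above, rests on a single closure construction: a formal concept is a pair (extent, intent) that is fixed under the two derivation operators. A naive sketch on a toy context (illustrative data, not from the proceedings) shows the idea; the powerset enumeration is fine only for tiny contexts:

```python
from itertools import combinations

# Toy formal context: each object mapped to the attributes it has.
context = {
    "sparrow": {"flies", "has_feathers"},
    "penguin": {"has_feathers", "swims"},
    "trout":   {"swims"},
}

def common_attributes(objs):
    """Derivation operator: attributes shared by all given objects
    (by convention, the empty object set yields all attributes)."""
    sets = [context[o] for o in objs]
    return set.intersection(*sets) if sets else set.union(*context.values())

def common_objects(attrs):
    """Dual derivation operator: objects having all given attributes."""
    return {o for o, a in context.items() if attrs <= a}

def formal_concepts():
    """Enumerate all (extent, intent) pairs closed under both operators."""
    all_objs = list(context)
    concepts = set()
    for r in range(len(all_objs) + 1):
        for objs in combinations(all_objs, r):
            intent = common_attributes(set(objs))
            extent = common_objects(intent)
            concepts.add((frozenset(extent), frozenset(intent)))
    return concepts
```

    On this context, ({penguin, trout}, {swims}) and ({sparrow}, {flies, has_feathers}) come out as concepts; ordered by extent inclusion, the concepts form the concept lattice that the FCA papers in this volume build on.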
    Content
    Concepts and Language: The Role of Conceptual Structure in Human Evolution (Keith Devlin) - Concepts in Linguistics - Concepts in Natural Language (Gisela Harras) - Patterns, Schemata, and Types: Author Support through Formalized Experience (Felix H. Gatzemeier) - Conventions and Notations for Knowledge Representation and Retrieval (Philippe Martin) - Conceptual Ontology: Ontology, Metadata, and Semiotics (John F. Sowa) - Pragmatically Yours (Mary Keeler) - Conceptual Modeling for Distributed Ontology Environments (Deborah L. McGuinness) - Discovery of Class Relations in Exception Structured Knowledge Bases (Hendra Suryanto, Paul Compton) - Conceptual Graphs: Perspectives: CGs Applications: Where Are We 7 Years after the First ICCS? (Michel Chein, David Genest) - The Engineering of a CG-Based System: Fundamental Issues (Guy W. Mineau) - Conceptual Graphs, Metamodeling, and Notation of Concepts (Olivier Gerbé, Guy W. Mineau, Rudolf K. Keller) - Knowledge Representation and Reasonings Based on Graph Homomorphism (Marie-Laure Mugnier) - User Modeling Using Conceptual Graphs for Intelligent Agents (James F. Baldwin, Trevor P. Martin, Aimilia Tzanavari) - Towards a Unified Querying System of Both Structured and Semi-structured Imprecise Data Using Fuzzy View (Patrice Buche, Ollivier Haemmerlé) - Formal Semantics of Conceptual Structures: The Extensional Semantics of the Conceptual Graph Formalism (Guy W. Mineau) - Semantics of Attribute Relations in Conceptual Graphs (Pavel Kocura) - Nested Concept Graphs and Triadic Power Context Families (Susanne Prediger) - Negations in Simple Concept Graphs (Frithjof Dau) - Extending the CG Model by Simulations (Jean-François Baget) - Contextual Logic and Formal Concept Analysis: Building and Structuring Description Logic Knowledge Bases Using Least Common Subsumers and Concept Analysis (Franz Baader, Ralf Molitor) - On the Contextual Logic of Ordinal Data (Silke Pollandt, Rudolf Wille) - Boolean Concept Logic (Rudolf Wille) - Lattices of Triadic Concept Graphs (Bernd Groh, Rudolf Wille) - Formalizing Hypotheses with Concepts (Bernhard Ganter, Sergei O. Kuznetsov) - Generalized Formal Concept Analysis (Laurent Chaudron, Nicolas Maille) - A Logical Generalization of Formal Concept Analysis (Sébastien Ferré, Olivier Ridoux) - On the Treatment of Incomplete Knowledge in Formal Concept Analysis (Peter Burmeister, Richard Holzer) - Conceptual Structures in Practice: Logic-Based Networks: Concept Graphs and Conceptual Structures (Peter W. Eklund) - Conceptual Knowledge Discovery and Data Analysis (Joachim Hereth, Gerd Stumme, Rudolf Wille, Uta Wille) - CEM - A Conceptual Email Manager (Richard Cole, Gerd Stumme) - A Contextual-Logic Extension of TOSCANA (Peter Eklund, Bernd Groh, Gerd Stumme, Rudolf Wille) - A Conceptual Graph Model for W3C Resource Description Framework (Olivier Corby, Rose Dieng, Cédric Hébert) - Computational Aspects of Conceptual Structures: Computing with Conceptual Structures (Bernhard Ganter) - Symmetry and the Computation of Conceptual Structures (Robert Levinson) - An Introduction to SNePS 3 (Stuart C. Shapiro) - Composition Norm Dynamics Calculation with Conceptual Graphs (Aldo de Moor) - From PROLOG++ to PROLOG+CG: A CG Object-Oriented Logic Programming Language (Adil Kabbaj, Martin Janta-Polczynski) - A Cost-Bounded Algorithm to Control Events Generalization (Gaël de Chalendar, Brigitte Grau, Olivier Ferret)
    Pages
    XI, 568 S.
    Series
    Lecture notes in computer science; vol.1867: Lecture notes on artificial intelligence
    Type
    s
  17. Fellbaum, C.: On the semantics of troponymy (2002) 0.00
    Abstract
    The principal relation linking verbs in a semantic network is the manner relation (or "troponymy"). We examine the nature of troponymy across different semantic domains and verb classes in an attempt to arrive at a more subtle understanding of this intuitive relation. Troponymy is not a semantically homogeneous relation; rather, it is polysemous and encompasses distinct sub-relations. We identify and discuss Manner, Function, and Result. Furthermore, different kinds of troponyms differ from their semantically less elaborated superordinates in their syntactic behavior. In some cases, troponyms exhibit a wider range of syntactic alternations; in other cases, the troponyms are more restricted in their argument-projecting properties.
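    The elaboration chains the abstract describes can be pictured with a toy troponym table. The entries and their Manner/Function labels below are hypothetical illustrations of the sub-relations the chapter distinguishes, not Fellbaum's data:

```python
# Each troponym points to its semantically less elaborated superordinate,
# together with the sub-relation of troponymy that links them.
TROPONYMS = {
    "whisper": ("talk", "Manner"),
    "mumble":  ("talk", "Manner"),
    "talk":    ("communicate", "Manner"),
    "teach":   ("communicate", "Function"),
}

def elaboration_chain(verb):
    """Walk up the troponymy links to the least elaborated superordinate,
    recording which sub-relation is used at each step."""
    chain, links = [verb], []
    while verb in TROPONYMS:
        verb, kind = TROPONYMS[verb]
        chain.append(verb)
        links.append(kind)
    return chain, links
```

    For "whisper" this yields the chain whisper → talk → communicate with two Manner links; a semantic network such as WordNet encodes exactly such verb hierarchies, without labeling the sub-relation.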
    Pages
    S.23-34
  18. ¬The semantics of relationships : an interdisciplinary perspective (2002) 0.00
    Abstract
    Work on relationships takes place in many communities, including, among others, data modeling, knowledge representation, natural language processing, linguistics, and information retrieval. Unfortunately, continued disciplinary splintering and specialization keep any one person from being familiar with the full expanse of that work. By including contributions from experts in a variety of disciplines and backgrounds, this volume demonstrates both the parallels that inform work on relationships across a number of fields and the singular emphases that have yet to be fully embraced. The volume is organized into three parts: (1) types of relationships; (2) relationships in knowledge representation and reasoning; (3) applications of relationships.
    Content
    Contains the contributions: Pt.1: Types of relationships: CRUSE, D.A.: Hyponymy and its varieties; FELLBAUM, C.: On the semantics of troponymy; PRIBBENOW, S.: Meronymic relationships: from classical mereology to complex part-whole relations; KHOO, C. u.a.: The many facets of the cause-effect relation - Pt.2: Relationships in knowledge representation and reasoning: GREEN, R.: Internally-structured conceptual models in cognitive semantics; HOVY, E.: Comparing sets of semantic relations in ontologies; GUARINO, N., C. WELTY: Identity and subsumption; JOUIS, C.: Logic of relationships - Pt.3: Applications of relationships: EVENS, M.: Thesaural relations in information retrieval; KHOO, C., S.H. MYAENG: Identifying semantic relations in text for information retrieval and information extraction; McCRAY, A.T., O. BODENREIDER: A conceptual framework for the biomedical domain; HETZLER, B.: Visual analysis and exploration of relationships
    Footnote
    With a detailed introduction by the editors on the topics: Types of relationships - Relationships in knowledge representation and reasoning - Applications of relationships
    Pages
    XVIII, 223 S.
    Type
    s
  19. Khoo, C.; Myaeng, S.H.: Identifying semantic relations in text for information retrieval and information extraction (2002) 0.00
    Abstract
    Automatic identification of semantic relations in text is a difficult problem, but is important for many applications. It has been used for relation matching in information retrieval, to retrieve documents that contain not only the concepts but also the relations between concepts specified in the user's query. It is an integral part of information extraction: extracting from natural language text facts or pieces of information related to a particular event or topic. Other potential applications are in the construction of relational thesauri (semantic networks of related concepts) and other kinds of knowledge bases, and in natural language processing applications such as machine translation and computer comprehension of text. This chapter examines the main methods used for identifying semantic relations automatically and their application in information retrieval and information extraction.
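    The simplest family of methods such surveys cover is surface-pattern matching: lexical patterns that signal a semantic relation between two concepts. A minimal sketch, with patterns and relation labels that are illustrative rather than taken from the chapter:

```python
import re

# Each pattern captures two concept slots and names the relation it signals.
PATTERNS = [
    (re.compile(r"(\w+) is a kind of (\w+)"), "hyponymy"),
    (re.compile(r"(\w+) is part of (\w+)"),   "meronymy"),
    (re.compile(r"(\w+) causes (\w+)"),       "cause-effect"),
]

def identify_relations(text):
    """Return (concept1, relation, concept2) triples for every pattern match."""
    triples = []
    for pattern, label in PATTERNS:
        for m in pattern.finditer(text):
            triples.append((m.group(1), label, m.group(2)))
    return triples
```

    Running this on "smoking causes cancer" yields the triple ("smoking", "cause-effect", "cancer"); real systems add part-of-speech constraints, larger pattern inventories, and statistical or parser-based disambiguation on top of this skeleton.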
    Pages
    S.161-180
  20. Khoo, C.; Chan, S.; Niu, Y.: ¬The many facets of the cause-effect relation (2002) 0.00
    Abstract
    This chapter presents a broad survey of the cause-effect relation, with particular emphasis on how the relation is expressed in text. Philosophers have been grappling with the concept of causation for centuries. Researchers in social psychology have found that the human mind has a very complex mechanism for identifying and attributing the cause for an event. Inferring cause-effect relations between events and statements has also been found to be an important part of reading and text comprehension, especially for narrative text. Though many of the cause-effect relations in text are implied and have to be inferred by the reader, there is also a wide variety of linguistic expressions for explicitly indicating cause and effect. In addition, it has been found that certain words have "causal valence": they bias the reader to attribute cause in certain ways. Cause-effect relations can also be divided into several different types.
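    The explicit linguistic indicators the abstract mentions can be operationalized in a few lines. The connective list below is an illustrative sample, not the chapter's inventory; the key point it demonstrates is that each connective fixes which side of the sentence is the cause:

```python
import re

# Connectives ordered longest-first so "because of" wins over "because".
CONNECTIVES = [
    (r"because of", "effect-first"),   # "<effect> because of <cause>"
    (r"because",    "effect-first"),
    (r"therefore",  "cause-first"),    # "<cause>, therefore <effect>"
    (r"leads to",   "cause-first"),
]

def cause_effect(sentence):
    """Split on the first explicit connective and return (cause, effect),
    or None when causation is at most implicit."""
    for marker, order in CONNECTIVES:
        m = re.search(marker, sentence)
        if m:
            left = sentence[:m.start()].strip(" ,.")
            right = sentence[m.end():].strip(" ,.")
            return (right, left) if order == "effect-first" else (left, right)
    return None
```

    Both "the road is wet because it rained" and "it rained, therefore the road is wet" come out as ("it rained", "the road is wet"), while a sentence with only implied causation returns None, mirroring the chapter's point that implicit cases need inference rather than surface cues.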
    Pages
    S.51-70