Search (47 results, page 1 of 3)

  • type_ss:"x"
  1. Haller, S.H.M.: Mappingverfahren zur Wissensorganisation (2002) 0.07
    0.06741698 = product of:
      0.23595941 = sum of:
        0.20916098 = weight(_text_:brain in 3406) [ClassicSimilarity], result of:
          0.20916098 = score(doc=3406,freq=2.0), product of:
            0.2736591 = queryWeight, product of:
              6.9177637 = idf(docFreq=118, maxDocs=44218)
              0.0395589 = queryNorm
            0.76431215 = fieldWeight in 3406, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.9177637 = idf(docFreq=118, maxDocs=44218)
              0.078125 = fieldNorm(doc=3406)
        0.026798425 = product of:
          0.05359685 = sum of:
            0.05359685 = weight(_text_:22 in 3406) [ClassicSimilarity], result of:
              0.05359685 = score(doc=3406,freq=2.0), product of:
                0.13852853 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0395589 = queryNorm
                0.38690117 = fieldWeight in 3406, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3406)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Date
    30. 5.2010 16:22:35
    Object
    Brain
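The indented breakdown under each hit is Lucene's ClassicSimilarity "explain" tree: a term's contribution is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and the document score scales the summed clause contributions by the coordination factor coord(matched/total). A minimal sketch in plain Python (values copied from the tree for result 1) reproduces those numbers:

```python
import math

def classic_term_score(freq, idf, query_norm, field_norm):
    """Lucene ClassicSimilarity: queryWeight * fieldWeight for one term."""
    tf = math.sqrt(freq)                  # 1.4142135 for freq=2.0
    query_weight = idf * query_norm       # 0.2736591 for _text_:brain
    field_weight = tf * idf * field_norm  # 0.76431215
    return query_weight * field_weight

# weight(_text_:brain in 3406), constants from the explain tree above
brain = classic_term_score(freq=2.0, idf=6.9177637,
                           query_norm=0.0395589, field_norm=0.078125)

# Document score: (brain clause + the already-halved "22" clause) * coord(2/7)
doc_score = (brain + 0.026798425) * (2 / 7)

print(round(brain, 8))      # ≈ 0.20916098
print(round(doc_score, 8))  # ≈ 0.06741698
```

This mirrors the nesting of the explain output: the inner `product of:` is fieldWeight, the outer one multiplies in queryWeight, and the final `coord(2/7)` reflects that two of seven query clauses matched.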
  2. Eichmann, S.: Global books in print (GBIP) : Analyse und Bewertung (1995) 0.02
    0.024998559 = product of:
      0.17498991 = sum of:
        0.17498991 = weight(_text_:global in 6071) [ClassicSimilarity], result of:
          0.17498991 = score(doc=6071,freq=2.0), product of:
            0.19788647 = queryWeight, product of:
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.0395589 = queryNorm
            0.88429445 = fieldWeight in 6071, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.125 = fieldNorm(doc=6071)
      0.14285715 = coord(1/7)
    
  3. Schröther, C.: Automatische Indexierung, Kategorisierung und inhaltliche Erschließung von Textnachrichten (2003) 0.02
    0.021506732 = product of:
      0.15054712 = sum of:
        0.15054712 = product of:
          0.30109423 = sum of:
            0.30109423 = weight(_text_:prof in 521) [ClassicSimilarity], result of:
              0.30109423 = score(doc=521,freq=2.0), product of:
                0.27749604 = queryWeight, product of:
                  7.014756 = idf(docFreq=107, maxDocs=44218)
                  0.0395589 = queryNorm
                1.0850397 = fieldWeight in 521, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.014756 = idf(docFreq=107, maxDocs=44218)
                  0.109375 = fieldNorm(doc=521)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Footnote
    Master's thesis, Faculty of Philosophy, Rheinische Friedrich-Wilhelms-Universität Bonn (Prof. Dr. W. Lenders)
  4. Sick, D.: Aufbau und Pflege komplexer natürlichsprachig basierter Dokumentationssprachen (Thesauri) : Aktuelle Tendenzen und kritische Analyse einer ausgewählten autonomen Thesaurus-Software für Personal Computer (PC) (1989) 0.02
    0.019053057 = product of:
      0.1333714 = sum of:
        0.1333714 = weight(_text_:personal in 206) [ClassicSimilarity], result of:
          0.1333714 = score(doc=206,freq=2.0), product of:
            0.19948503 = queryWeight, product of:
              5.0427346 = idf(docFreq=775, maxDocs=44218)
              0.0395589 = queryNorm
            0.66857845 = fieldWeight in 206, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.0427346 = idf(docFreq=775, maxDocs=44218)
              0.09375 = fieldNorm(doc=206)
      0.14285715 = coord(1/7)
    
  5. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.02
    0.017951434 = product of:
      0.12566003 = sum of:
        0.12566003 = product of:
          0.3769801 = sum of:
            0.3769801 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
              0.3769801 = score(doc=973,freq=2.0), product of:
                0.3353808 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0395589 = queryNorm
                1.1240361 = fieldWeight in 973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.09375 = fieldNorm(doc=973)
          0.33333334 = coord(1/3)
      0.14285715 = coord(1/7)
    
    Content
    Cf.: http://creativechoice.org/doc/HansJonas.pdf.
  6. Witschel, H.F.: Global and local resources for peer-to-peer text retrieval (2008) 0.01
    0.013394875 = product of:
      0.09376413 = sum of:
        0.09376413 = weight(_text_:global in 127) [ClassicSimilarity], result of:
          0.09376413 = score(doc=127,freq=12.0), product of:
            0.19788647 = queryWeight, product of:
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.0395589 = queryNorm
            0.47382787 = fieldWeight in 127, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.02734375 = fieldNorm(doc=127)
      0.14285715 = coord(1/7)
    
    Abstract
    Chapter 5 empirically tackles the first of the two research questions formulated above, namely the question of global collection statistics. More precisely, it studies possibilities of radically simplified results merging. The simplification comes from the attempt - without having knowledge of the complete collection - to equip all peers with the same global statistics, making document scores comparable across peers. What is examined, is the question of how we can obtain such global statistics and to what extent their use will lead to a drop in retrieval effectiveness. In chapter 6, the second research question is tackled, namely that of making forwarding decisions for queries, based on profiles of other peers. After a review of related work in that area, the chapter first defines the approaches that will be compared against each other. Then, a novel evaluation framework is introduced, including a new measure for comparing results of a distributed search engine against those of a centralised one. Finally, the actual evaluation is performed using the new framework.
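The tf and idf figures in these explain trees are themselves derived values: ClassicSimilarity uses tf = √freq and idf = 1 + ln(maxDocs / (docFreq + 1)). A quick check against the `_text_:global` clause above (freq=12, docFreq=807, maxDocs=44218):

```python
import math

def tf(freq):
    """ClassicSimilarity term frequency: square root of the raw count."""
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    """ClassicSimilarity inverse document frequency."""
    return 1.0 + math.log(max_docs / (doc_freq + 1))

print(tf(12.0))         # ≈ 3.4641016, matching "tf(freq=12.0)" above
print(idf(807, 44218))  # ≈ 5.002325, matching "idf(docFreq=807, maxDocs=44218)"
```

The same formulas reproduce the rarer terms as well, e.g. idf(118, 44218) ≈ 6.9177637 for "brain"; the rarer a term, the larger its idf and hence its weight in the score.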
  7. Walther, R.: Möglichkeiten und Grenzen automatischer Klassifikationen von Web-Dokumenten (2001) 0.01
    0.010753366 = product of:
      0.07527356 = sum of:
        0.07527356 = product of:
          0.15054712 = sum of:
            0.15054712 = weight(_text_:prof in 1562) [ClassicSimilarity], result of:
              0.15054712 = score(doc=1562,freq=2.0), product of:
                0.27749604 = queryWeight, product of:
                  7.014756 = idf(docFreq=107, maxDocs=44218)
                  0.0395589 = queryNorm
                0.54251987 = fieldWeight in 1562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.014756 = idf(docFreq=107, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1562)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Footnote
    Licentiate thesis at the Faculty of Law and Economics of the Universität Bern, Institut für Wirtschaftsinformatik (Prof. G. Knolmayer)
  8. Frei, R.: Informationswissenschaftliche Begriffe und Kernprozesse aus Sicht des Radikalen Konstruktivismus (2009) 0.01
    0.00921717 = product of:
      0.06452019 = sum of:
        0.06452019 = product of:
          0.12904038 = sum of:
            0.12904038 = weight(_text_:prof in 3268) [ClassicSimilarity], result of:
              0.12904038 = score(doc=3268,freq=2.0), product of:
                0.27749604 = queryWeight, product of:
                  7.014756 = idf(docFreq=107, maxDocs=44218)
                  0.0395589 = queryNorm
                0.46501702 = fieldWeight in 3268, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.014756 = idf(docFreq=107, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3268)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Content
    This publication originated as a diploma thesis for the degree of dipl. Informations- und Dokumentationsspezialist FH (certified information and documentation specialist). Referee: Prof. Dr. Norbert Lang; co-referee: Dr. Rafael Ball. See: http://www.fh-htwchur.ch/uploads/media/CSI_34_Frei.pdf.
  9. Parsian, D.: Überlegungen zur Aufstellungssystematik und Reklassifikation an der Fachbereichsbibliothek Afrikawissenschaften und Orientalistik (2007) 0.01
    0.007938774 = product of:
      0.055571415 = sum of:
        0.055571415 = weight(_text_:personal in 3396) [ClassicSimilarity], result of:
          0.055571415 = score(doc=3396,freq=2.0), product of:
            0.19948503 = queryWeight, product of:
              5.0427346 = idf(docFreq=775, maxDocs=44218)
              0.0395589 = queryNorm
            0.27857435 = fieldWeight in 3396, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.0427346 = idf(docFreq=775, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3396)
      0.14285715 = coord(1/7)
    
    Abstract
    The practical use of the Dewey Decimal Classification (DDC) for subject indexing and as a shelving scheme in academic libraries of the German-speaking world has little tradition and has so far scarcely been covered in the literature. After describing the general conditions and the problems at the Fachbereichsbibliothek Afrikanistik/Orientalistik of the Universität Wien, the author gives an overview of the experience with and assessment of DDC in comparable academic libraries, above all in the German- and English-speaking worlds, defines criteria for a new classification scheme, and clarifies to what extent these can be met by adopting DDC. Starting from the quantitative and spatial conditions and the segmentation of the holdings with regard to the requirements of reclassification, and drawing on his own experience and plausibility checks, the author estimates, for three variants, the staff and time that adopting DDC in a reclassification project would require. Finally, the thesis reports practical experience with DDC using the subject area "Islamic studies" as an example, pointing out some peculiarities and problems in using DDC for reclassification.
  10. Nagelschmidt, M.: Integration und Anwendung von "Semantic Web"-Technologien im betrieblichen Wissensmanagement (2012) 0.01
    0.007938774 = product of:
      0.055571415 = sum of:
        0.055571415 = weight(_text_:personal in 11) [ClassicSimilarity], result of:
          0.055571415 = score(doc=11,freq=2.0), product of:
            0.19948503 = queryWeight, product of:
              5.0427346 = idf(docFreq=775, maxDocs=44218)
              0.0395589 = queryNorm
            0.27857435 = fieldWeight in 11, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.0427346 = idf(docFreq=775, maxDocs=44218)
              0.0390625 = fieldNorm(doc=11)
      0.14285715 = coord(1/7)
    
    Abstract
    Knowledge management is a subject area with numerous disciplinary ties, in particular to business informatics and to management, personnel, and organization studies as subfields of business administration. In a broader understanding there are also ties to organizational psychology, computer science, and information science. Developments in these reference disciplines can therefore also provide impulses for the concepts, methods, and technologies of knowledge management. The idea, originating in computer science, of extending the World Wide Web (WWW) into a semantic network can be seen as one such impulse-giving development. Over the past decade this idea has reached a sufficient degree of maturity that a potential relevance for knowledge management may be assumed. This thesis demonstrates, by means of a concrete conceptual approach, how this technological impulse can be channeled to the benefit of knowledge management. Such an inquiry first requires working out an operational understanding of knowledge management on which the further discussion can build. The architecture and functioning of a "Semantic Web" as well as XML and the ontology languages RDF/RDFS and OWL are also introduced as the principal tools for ontology-based knowledge representation. An approach to integrating and applying these semantic technologies in knowledge management is then presented, describing a largely automated knowledge modeling followed by semantic indexing of the company's data. For illustration, a fictitious example world from the manufacturing industry is used. Finally, the benefit of this approach is illustrated by application scenarios of information retrieval (IR) in the context of business processes.
  11. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.01
    0.007479765 = product of:
      0.05235835 = sum of:
        0.05235835 = product of:
          0.15707505 = sum of:
            0.15707505 = weight(_text_:3a in 4997) [ClassicSimilarity], result of:
              0.15707505 = score(doc=4997,freq=2.0), product of:
                0.3353808 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0395589 = queryNorm
                0.46834838 = fieldWeight in 4997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4997)
          0.33333334 = coord(1/3)
      0.14285715 = coord(1/7)
    
    Content
    PhD Dissertation at International Doctorate School in Information and Communication Technology. Cf.: https://core.ac.uk/download/pdf/150083013.pdf.
  12. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.01
    0.007479765 = product of:
      0.05235835 = sum of:
        0.05235835 = product of:
          0.15707505 = sum of:
            0.15707505 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.15707505 = score(doc=4388,freq=2.0), product of:
                0.3353808 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0395589 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
      0.14285715 = coord(1/7)
    
    Footnote
    See: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls.
  13. Piros, A.: Az ETO-jelzetek automatikus interpretálásának és elemzésének kérdései (2018) 0.01
    0.007479765 = product of:
      0.05235835 = sum of:
        0.05235835 = product of:
          0.15707505 = sum of:
            0.15707505 = weight(_text_:3a in 855) [ClassicSimilarity], result of:
              0.15707505 = score(doc=855,freq=2.0), product of:
                0.3353808 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0395589 = queryNorm
                0.46834838 = fieldWeight in 855, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=855)
          0.33333334 = coord(1/3)
      0.14285715 = coord(1/7)
    
    Content
    See also: New automatic interpreter for complex UDC numbers. Under: <https://udcc.org/files/AttilaPiros_EC_36-37_2014-2015.pdf>
  14. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.01
    0.007479765 = product of:
      0.05235835 = sum of:
        0.05235835 = product of:
          0.15707505 = sum of:
            0.15707505 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
              0.15707505 = score(doc=1000,freq=2.0), product of:
                0.3353808 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0395589 = queryNorm
                0.46834838 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.33333334 = coord(1/3)
      0.14285715 = coord(1/7)
    
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. For the accompanying presentation see: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
  15. Munzner, T.: Interactive visualization of large graphs and networks (2000) 0.01
    0.0062496397 = product of:
      0.043747477 = sum of:
        0.043747477 = weight(_text_:global in 4746) [ClassicSimilarity], result of:
          0.043747477 = score(doc=4746,freq=2.0), product of:
            0.19788647 = queryWeight, product of:
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.0395589 = queryNorm
            0.22107361 = fieldWeight in 4746, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.03125 = fieldNorm(doc=4746)
      0.14285715 = coord(1/7)
    
    Abstract
    Many real-world domains can be represented as large node-link graphs: backbone Internet routers connect with 70,000 other hosts, mid-sized Web servers handle between 20,000 and 200,000 hyperlinked documents, and dictionaries contain millions of words defined in terms of each other. Computational manipulation of such large graphs is common, but previous tools for graph visualization have been limited to datasets of a few thousand nodes. Visual depictions of graphs and networks are external representations that exploit human visual processing to reduce the cognitive load of many tasks that require understanding of global or local structure. We assert that the two key advantages of computer-based systems for information visualization over traditional paper-based visual exposition are interactivity and scalability. We also argue that designing visualization software by taking the characteristics of a target user's task domain into account leads to systems that are more effective and scale to larger datasets than previous work. This thesis contains a detailed analysis of three specialized systems for the interactive exploration of large graphs, relating the intended tasks to the spatial layout and visual encoding choices. We present two novel algorithms for specialized layout and drawing that use quite different visual metaphors. The H3 system for visualizing the hyperlink structures of web sites scales to datasets of over 100,000 nodes by using a carefully chosen spanning tree as the layout backbone, 3D hyperbolic geometry for a Focus+Context view, and provides a fluid interactive experience through guaranteed frame rate drawing. The Constellation system features a highly specialized 2D layout intended to spatially encode domain-specific information for computational linguists checking the plausibility of a large semantic network created from dictionaries. 
The Planet Multicast system for displaying the tunnel topology of the Internet's multicast backbone provides a literal 3D geographic layout of arcs on a globe to help MBone maintainers find misconfigured long-distance tunnels. Each of these three systems provides a very different view of the graph structure, and we evaluate their efficacy for the intended task. We generalize these findings in our analysis of the importance of interactivity and specialization for graph visualization systems that are effective and scalable.
  16. Heinz, A.: ¬Die Lösung des Leib-Seele-Problems bei John R. Searle (2002) 0.01
    0.00614478 = product of:
      0.04301346 = sum of:
        0.04301346 = product of:
          0.08602692 = sum of:
            0.08602692 = weight(_text_:prof in 4299) [ClassicSimilarity], result of:
              0.08602692 = score(doc=4299,freq=2.0), product of:
                0.27749604 = queryWeight, product of:
                  7.014756 = idf(docFreq=107, maxDocs=44218)
                  0.0395589 = queryNorm
                0.31001136 = fieldWeight in 4299, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.014756 = idf(docFreq=107, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4299)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Content
    A term paper, Johannes Gutenberg-Universität, Fachbereich Katholische Theologie, Seminar für Fundamentaltheologie und Religionswissenschaft; graduate seminar "Das Leib-Seele-Problem" (summer semester 2002), led by Prof. Dr. Arnim Kreiner
  17. Stünkel, M.: Neuere Methoden der inhaltlichen Erschließung schöner Literatur in öffentlichen Bibliotheken (1986) 0.01
    0.006125354 = product of:
      0.042877477 = sum of:
        0.042877477 = product of:
          0.08575495 = sum of:
            0.08575495 = weight(_text_:22 in 5815) [ClassicSimilarity], result of:
              0.08575495 = score(doc=5815,freq=2.0), product of:
                0.13852853 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0395589 = queryNorm
                0.61904186 = fieldWeight in 5815, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=5815)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    4. 8.2006 21:35:22
  18. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.01
    0.0059838123 = product of:
      0.041886684 = sum of:
        0.041886684 = product of:
          0.12566005 = sum of:
            0.12566005 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.12566005 = score(doc=701,freq=2.0), product of:
                0.3353808 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0395589 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
      0.14285715 = coord(1/7)
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  19. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.01
    0.0059838123 = product of:
      0.041886684 = sum of:
        0.041886684 = product of:
          0.12566005 = sum of:
            0.12566005 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.12566005 = score(doc=5820,freq=2.0), product of:
                0.3353808 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0395589 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.33333334 = coord(1/3)
      0.14285715 = coord(1/7)
    
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  20. Onofri, A.: Concepts in context (2013) 0.01
    0.0054684347 = product of:
      0.03827904 = sum of:
        0.03827904 = weight(_text_:global in 1077) [ClassicSimilarity], result of:
          0.03827904 = score(doc=1077,freq=2.0), product of:
            0.19788647 = queryWeight, product of:
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.0395589 = queryNorm
            0.19343941 = fieldWeight in 1077, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1077)
      0.14285715 = coord(1/7)
    
    Abstract
    My thesis discusses two related problems that have taken center stage in the recent literature on concepts: 1) What are the individuation conditions of concepts? Under what conditions is a concept Cv(1) the same concept as a concept Cv(2)? 2) What are the possession conditions of concepts? What conditions must be satisfied for a thinker to have a concept C? The thesis defends a novel account of concepts, which I call "pluralist-contextualist": 1) Pluralism: Different concepts have different kinds of individuation and possession conditions: some concepts are individuated more "coarsely", have less demanding possession conditions and are widely shared, while other concepts are individuated more "finely" and not shared. 2) Contextualism: When a speaker ascribes a propositional attitude to a subject S, or uses his ascription to explain/predict S's behavior, the speaker's intentions in the relevant context determine the correct individuation conditions for the concepts involved in his report. In chapters 1-3 I defend a contextualist, non-Millian theory of propositional attitude ascriptions. Then, I show how contextualism can be used to offer a novel perspective on the problem of concept individuation/possession. More specifically, I employ contextualism to provide a new, more effective argument for Fodor's "publicity principle": if contextualism is true, then certain specific concepts must be shared in order for interpersonally applicable psychological generalizations to be possible. In chapters 4-5 I raise a tension between publicity and another widely endorsed principle, the "Fregean constraint" (FC): subjects who are unaware of certain identity facts and find themselves in so-called "Frege cases" must have distinct concepts for the relevant object x. For instance: the ancient astronomers had distinct concepts (HESPERUS/PHOSPHORUS) for the same object (the planet Venus). 
First, I examine some leading theories of concepts and argue that they cannot meet both of our constraints at the same time. Then, I offer principled reasons to think that no theory can satisfy (FC) while also respecting publicity. (FC) appears to require a form of holism, on which a concept is individuated by its global inferential role in a subject S and can thus only be shared by someone who has exactly the same inferential dispositions as S. This explains the tension between publicity and (FC), since holism is clearly incompatible with concept shareability. To solve the tension, I suggest adopting my pluralist-contextualist proposal: concepts involved in Frege cases are holistically individuated and not public, while other concepts are more coarsely individuated and widely shared; given this "plurality" of concepts, we will then need contextual factors (speakers' intentions) to "select" the specific concepts to be employed in our intentional generalizations in the relevant contexts. In chapter 6 I develop the view further by contrasting it with some rival accounts. First, I examine a very different kind of pluralism about concepts, which has been recently defended by Daniel Weiskopf, and argue that it is insufficiently radical. Then, I consider the inferentialist accounts defended by authors like Peacocke, Rey and Jackson. Such views, I argue, are committed to an implausible picture of reference determination, on which our inferential dispositions fix the reference of our concepts: this leads to wrong predictions in all those cases of scientific disagreement where two parties have very different inferential dispositions and yet seem to refer to the same natural kind.

Languages

  • d 36
  • e 10
  • hu 1