Search (41 results, page 1 of 3)

  • Filter: type_ss:"x"
  • Filter: language_ss:"e"
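The two facet filters above are presumably applied as Solr filter queries. A rough sketch of the kind of request that could produce a page like this (the endpoint, parameter names, and `debugQuery` flag are assumptions; the actual search terms are not shown on the page):

```python
from urllib.parse import urlencode

# Hypothetical reconstruction of the request behind this result page.
# The facet chips map to repeated fq (filter query) parameters;
# debugQuery=true is what would emit the score "explain" trees.
query = "/solr/select?" + urlencode([
    ("q", "..."),                 # search terms not shown on the page
    ("fq", 'type_ss:"x"'),
    ("fq", 'language_ss:"e"'),
    ("rows", "20"),
    ("debugQuery", "true"),
])
print(query)
```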
  1. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.05
    0.04655082 = sum of:
      0.045551956 = product of:
        0.18220782 = sum of:
          0.18220782 = weight(_text_:3a in 4997) [ClassicSimilarity], result of:
            0.18220782 = score(doc=4997,freq=2.0), product of:
              0.38904333 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.045888513 = queryNorm
              0.46834838 = fieldWeight in 4997, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4997)
        0.25 = coord(1/4)
      9.988648E-4 = product of:
        0.0029965942 = sum of:
          0.0029965942 = weight(_text_:s in 4997) [ClassicSimilarity], result of:
            0.0029965942 = score(doc=4997,freq=2.0), product of:
              0.049891718 = queryWeight, product of:
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.045888513 = queryNorm
              0.060061958 = fieldWeight in 4997, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4997)
        0.33333334 = coord(1/3)
    
    Content
     PhD dissertation at the International Doctorate School in Information and Communication Technology. Cf.: https://core.ac.uk/download/pdf/150083013.pdf.
    Pages
     XIV, 140 S
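The score breakdowns above follow Lucene's ClassicSimilarity (TF-IDF) explain format. As a sanity check, the leaf values for the "3a" term in result 1 can be re-derived from the reported freq, docFreq, queryNorm, and fieldNorm (a minimal sketch; idf = 1 + ln(maxDocs/(docFreq+1)) is ClassicSimilarity's formula):

```python
import math

# Inputs reported in the explain tree for doc 4997.
freq = 2.0
doc_freq, max_docs = 24, 44218
query_norm = 0.045888513
field_norm = 0.0390625

idf = 1.0 + math.log(max_docs / (doc_freq + 1))    # 8.478011
tf = math.sqrt(freq)                               # 1.4142135
query_weight = idf * query_norm                    # 0.38904333
field_weight = tf * idf * field_norm               # 0.46834838
term_score = query_weight * field_weight           # 0.18220782
clause_score = term_score * (1 / 4)                # coord(1/4) -> 0.045551956
print(f"{clause_score:.9f}")
```

The same arithmetic reproduces every other explain leaf on this page; only freq, docFreq, and fieldNorm change per term and document.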
  2. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.04
    0.037240654 = sum of:
      0.03644156 = product of:
        0.14576624 = sum of:
          0.14576624 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
            0.14576624 = score(doc=701,freq=2.0), product of:
              0.38904333 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.045888513 = queryNorm
              0.3746787 = fieldWeight in 701, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.03125 = fieldNorm(doc=701)
        0.25 = coord(1/4)
      7.9909177E-4 = product of:
        0.0023972753 = sum of:
          0.0023972753 = weight(_text_:s in 701) [ClassicSimilarity], result of:
            0.0023972753 = score(doc=701,freq=2.0), product of:
              0.049891718 = queryWeight, product of:
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.045888513 = queryNorm
              0.048049565 = fieldWeight in 701, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.03125 = fieldNorm(doc=701)
        0.33333334 = coord(1/3)
    
    Content
     Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
    Pages
    249 S
  3. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.04
    0.037240654 = sum of:
      0.03644156 = product of:
        0.14576624 = sum of:
          0.14576624 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
            0.14576624 = score(doc=5820,freq=2.0), product of:
              0.38904333 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.045888513 = queryNorm
              0.3746787 = fieldWeight in 5820, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.03125 = fieldNorm(doc=5820)
        0.25 = coord(1/4)
      7.9909177E-4 = product of:
        0.0023972753 = sum of:
          0.0023972753 = weight(_text_:s in 5820) [ClassicSimilarity], result of:
            0.0023972753 = score(doc=5820,freq=2.0), product of:
              0.049891718 = queryWeight, product of:
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.045888513 = queryNorm
              0.048049565 = fieldWeight in 5820, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.03125 = fieldNorm(doc=5820)
        0.33333334 = coord(1/3)
    
    Content
     Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
    Pages
    iii, 82 S
  4. Gordon, T.J.; Helmer-Hirschberg, O.: Report on a long-range forecasting study (1964) 0.03
    0.02504494 = product of:
      0.05008988 = sum of:
        0.05008988 = product of:
          0.07513482 = sum of:
            0.0047945506 = weight(_text_:s in 4204) [ClassicSimilarity], result of:
              0.0047945506 = score(doc=4204,freq=2.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.09609913 = fieldWeight in 4204, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4204)
            0.07034027 = weight(_text_:22 in 4204) [ClassicSimilarity], result of:
              0.07034027 = score(doc=4204,freq=4.0), product of:
                0.16069375 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045888513 = queryNorm
                0.4377287 = fieldWeight in 4204, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4204)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    22. 6.2018 13:24:08
    22. 6.2018 13:54:52
    Pages
    xi, 65 S
  5. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.01
    0.013633157 = product of:
      0.027266314 = sum of:
        0.027266314 = product of:
          0.04089947 = sum of:
            0.0035959128 = weight(_text_:s in 563) [ClassicSimilarity], result of:
              0.0035959128 = score(doc=563,freq=2.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.072074346 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
            0.03730356 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.03730356 = score(doc=563,freq=2.0), product of:
                0.16069375 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045888513 = queryNorm
                0.23214069 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    10. 1.2013 19:22:47
    Pages
    vii, 104 S
  6. Geisriegler, E.: Enriching electronic texts with semantic metadata : a use case for the historical Newspaper Collection ANNO (Austrian Newspapers Online) of the Austrian National Library (2012) 0.01
    0.011360965 = product of:
      0.02272193 = sum of:
        0.02272193 = product of:
          0.034082893 = sum of:
            0.0029965942 = weight(_text_:s in 595) [ClassicSimilarity], result of:
              0.0029965942 = score(doc=595,freq=2.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.060061958 = fieldWeight in 595, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=595)
            0.0310863 = weight(_text_:22 in 595) [ClassicSimilarity], result of:
              0.0310863 = score(doc=595,freq=2.0), product of:
                0.16069375 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045888513 = queryNorm
                0.19345059 = fieldWeight in 595, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=595)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    3. 2.2013 18:00:22
    Pages
    345 S
  7. Kiren, T.: ¬A clustering based indexing technique of modularized ontologies for information retrieval (2017) 0.01
    0.009088771 = product of:
      0.018177543 = sum of:
        0.018177543 = product of:
          0.027266314 = sum of:
            0.0023972753 = weight(_text_:s in 4399) [ClassicSimilarity], result of:
              0.0023972753 = score(doc=4399,freq=2.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.048049565 = fieldWeight in 4399, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4399)
            0.02486904 = weight(_text_:22 in 4399) [ClassicSimilarity], result of:
              0.02486904 = score(doc=4399,freq=2.0), product of:
                0.16069375 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045888513 = queryNorm
                0.15476047 = fieldWeight in 4399, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4399)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    20. 1.2015 18:30:22
    Pages
    ix, 74 S
  8. Makewita, S.M.: Investigating the generic information-seeking function of organisational decision-makers : perspectives on improving organisational information systems (2002) 0.01
    0.00518105 = product of:
      0.0103621 = sum of:
        0.0103621 = product of:
          0.0310863 = sum of:
            0.0310863 = weight(_text_:22 in 642) [ClassicSimilarity], result of:
              0.0310863 = score(doc=642,freq=2.0), product of:
                0.16069375 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045888513 = queryNorm
                0.19345059 = fieldWeight in 642, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=642)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    22. 7.2022 12:16:58
  9. Tavakolizadeh-Ravari, M.: Analysis of the long term dynamics in thesaurus developments and its consequences (2017) 0.00
    0.004249825 = product of:
      0.00849965 = sum of:
        0.00849965 = product of:
          0.012749475 = sum of:
            0.0103522 = weight(_text_:d in 3081) [ClassicSimilarity], result of:
              0.0103522 = score(doc=3081,freq=4.0), product of:
                0.0871823 = queryWeight, product of:
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.045888513 = queryNorm
                0.118742 = fieldWeight in 3081, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3081)
            0.0023972753 = weight(_text_:s in 3081) [ClassicSimilarity], result of:
              0.0023972753 = score(doc=3081,freq=2.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.048049565 = fieldWeight in 3081, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3081)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
     The thesis analyzes the dynamic development and use of thesaurus terms. In addition, it focuses on the factors that influence the number of index terms per document or journal. The MeSH and the corresponding database MEDLINE served as the objects of study. The most important findings are: 1. The MeSH thesaurus has grown logarithmically through three distinct phases. Such a thesaurus should follow the equation T = 3076.6 ln(d) - 22695 + 0.0039d (T = terms, ln = natural logarithm, d = documents). To construct such a thesaurus, one therefore needs about 1,600 documents covering different topics within the scope of the thesaurus. The dynamic development of a thesaurus such as MeSH requires the introduction of one new term per 256 newly indexed documents. 2. The distribution of thesaurus terms yielded three categories: strongly, normally, and rarely used headings. The last group is in a test phase, while in the first and second categories the newly added descriptors drive thesaurus growth. 3. There is a logarithmic relationship between the number of index terms per article and its page count, for articles between one and twenty-one pages. 4. Journal articles that appear in MEDLINE with abstracts receive almost two more descriptors. 5. The findability of non-English-language documents in MEDLINE is lower than that of English-language documents. 6. Articles from journals with an impact factor between 0 and fifteen do not receive more index terms than those of the other journals covered by MEDLINE. 7. Within an indexing system, different journals carry more or less weight in their findability. The distribution of index terms per page showed that MEDLINE has three categories of publications. Moreover, there are a few strongly favored journals.
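A quick check of the growth equation quoted in the abstract (decimals normalized from the German notation): solving T = 0 numerically does land near the roughly 1,600 documents the abstract mentions.

```python
import math

def terms(d: float) -> float:
    # T = 3076.6 * ln(d) - 22695 + 0.0039 * d
    return 3076.6 * math.log(d) - 22695 + 0.0039 * d

# terms() is strictly increasing, negative at d=100 and positive at
# d=10000, so bisection finds the unique root on that interval.
lo, hi = 100.0, 10000.0
for _ in range(60):
    mid = (lo + hi) / 2
    if terms(mid) < 0:
        lo = mid
    else:
        hi = mid
print(round(lo))  # roughly 1600, consistent with the abstract
```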
    Pages
    128 S
  10. Ziemba, L.: Information retrieval with concept discovery in digital collections for agriculture and natural resources (2011) 0.00
    0.0032391287 = product of:
      0.0064782575 = sum of:
        0.0064782575 = product of:
          0.009717386 = sum of:
            0.0073201107 = weight(_text_:d in 4728) [ClassicSimilarity], result of:
              0.0073201107 = score(doc=4728,freq=2.0), product of:
                0.0871823 = queryWeight, product of:
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.045888513 = queryNorm
                0.08396327 = fieldWeight in 4728, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4728)
            0.0023972753 = weight(_text_:s in 4728) [ClassicSimilarity], result of:
              0.0023972753 = score(doc=4728,freq=2.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.048049565 = fieldWeight in 4728, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4728)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Content
     Cf.: http://proquest.umi.com/pqdlink?did=2425185191&Fmt=7&clientId=79356&RQT=309&VName=PQD.
    Pages
    172 S
  11. Oberhauser, O.: Card-Image Public Access Catalogues (CIPACs) : a critical consideration of a cost-effective alternative to full retrospective catalogue conversion (2002) 0.00
    0.0028342376 = product of:
      0.0056684753 = sum of:
        0.0056684753 = product of:
          0.0085027125 = sum of:
            0.0064050965 = weight(_text_:d in 1703) [ClassicSimilarity], result of:
              0.0064050965 = score(doc=1703,freq=2.0), product of:
                0.0871823 = queryWeight, product of:
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.045888513 = queryNorm
                0.07346786 = fieldWeight in 1703, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1703)
            0.002097616 = weight(_text_:s in 1703) [ClassicSimilarity], result of:
              0.002097616 = score(doc=1703,freq=2.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.04204337 = fieldWeight in 1703, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1703)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Footnote
     Review in: ABI-Technik 21(2002) H.3, S.292 (E. Pietzsch): "With his thesis, Otto C. Oberhauser has presented an impressive analysis of digitized card catalogues (CIPACs). The work offers a wealth of data and statistics not previously available. Librarians considering the digitization of their catalogues will find in it a unique template for decision-making. After an introductory chapter, Oberhauser first gives an overview of a selection of CIPACs available worldwide and their indexing methods (binary search, partial indexing, search in OCR data), and offers comparative observations on geographic distribution, size, software, navigation, and other properties. He then describes and analyzes implementation issues, beginning with the reasons that may lead to digitization: costs, implementation time, improved access, shelf-space savings. He continues with technical aspects such as scanning and quality control, image standards, OCR, manual post-processing, and server technology. In doing so he also addresses the rather obstructive properties of older catalogues, as well as presentation on the Web and integration with existing OPACs. To one important aspect, namely the assessment by the most important target group, the library users, Oberhauser devoted a field study of his own, whose results he analyzes in depth in the final chapter. Appendices on the method of data collection and detailed descriptions of many catalogues round off the work. All in all, I can only describe this as the most impressive collection of data, statistics, and analyses on the subject of CIPACs that I have encountered so far.
 I want to dwell on one nicely elaborated point, namely the extensive fragmentation of the software systems in use: at present we can roughly distinguish between turnkey solutions (a contracted firm acts as general contractor and carries out all tasks, from digitization to delivery of the finished application) and split solutions (digitization is contracted out separately from indexing and software development, or handled in-house). The latter presuppose in-house project management. In-house software development in particular can yield solutions in no way inferior to commercial offerings. It is only a pity that the many in-house developments have not yet led to initiatives that, much as with public-domain software, aim at an 'optimal', inexpensive, and widely accepted software solution. A few critical remarks should nevertheless not go unmentioned. For example, there is no differentiation between 'guide-card' systems, i.e. those that index only every 20th or 50th card, and systems with complete indexing of all card headings, even though this far-reaching design decision shifts considerable costs between catalogue creation and later use. In the statistical evaluation of the field study, I would also have liked a finer differentiation by CIPAC type or by library. For instance, more than half of the users surveyed stated that operating the CIPAC was initially hard to understand or that using it was time-consuming. It remains open, however, whether there are differences between the various implementation types."
    Source
    http://www.ub.tuwien.ac.at/cipacs/d-i.html
  12. Markó, K.G.: Foundation, implementation and evaluation of the MorphoSaurus system (2008) 0.00
    0.0028342376 = product of:
      0.0056684753 = sum of:
        0.0056684753 = product of:
          0.0085027125 = sum of:
            0.0064050965 = weight(_text_:d in 4415) [ClassicSimilarity], result of:
              0.0064050965 = score(doc=4415,freq=2.0), product of:
                0.0871823 = queryWeight, product of:
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.045888513 = queryNorm
                0.07346786 = fieldWeight in 4415, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4415)
            0.002097616 = weight(_text_:s in 4415) [ClassicSimilarity], result of:
              0.002097616 = score(doc=4415,freq=2.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.04204337 = fieldWeight in 4415, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4415)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Content
     Jena, Univ., Diss., 2008. Cf.: urn:nbn:de:gbv:27-20090305-111708-4; also at: http://d-nb.info/993373798/about/html.
    Pages
    x, 202 S
  13. Nielsen, M.L.: ¬The word association method (2002) 0.00
    0.0013984106 = product of:
      0.0027968213 = sum of:
        0.0027968213 = product of:
          0.008390464 = sum of:
            0.008390464 = weight(_text_:s in 3811) [ClassicSimilarity], result of:
              0.008390464 = score(doc=3811,freq=2.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.16817348 = fieldWeight in 3811, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3811)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Pages
    342 S
  14. Kara, S.: ¬An ontology-based retrieval system using semantic indexing (2012) 0.00
    0.0010380507 = product of:
      0.0020761015 = sum of:
        0.0020761015 = product of:
          0.0062283045 = sum of:
            0.0062283045 = weight(_text_:s in 3829) [ClassicSimilarity], result of:
              0.0062283045 = score(doc=3829,freq=6.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.124836445 = fieldWeight in 3829, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3829)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
     Thesis submitted to the Graduate School of Natural and Applied Sciences of Middle East Technical University in partial fulfilment of the requirements for the degree of Master of Science in Computer Engineering (XII, 57 S.)
    Source
    Information Systems. 37(2012) no. 4, S.294-305
  15. Schmolz, H.: Anaphora resolution and text retrieval : a linguistic analysis of hypertexts (2015) 0.00
    9.988648E-4 = product of:
      0.0019977295 = sum of:
        0.0019977295 = product of:
          0.0059931884 = sum of:
            0.0059931884 = weight(_text_:s in 1172) [ClassicSimilarity], result of:
              0.0059931884 = score(doc=1172,freq=2.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.120123915 = fieldWeight in 1172, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1172)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Pages
    VIII, 280 S
  16. Schmolz, H.: Anaphora resolution and text retrieval : a linguistic analysis of hypertexts (2013) 0.00
    9.988648E-4 = product of:
      0.0019977295 = sum of:
        0.0019977295 = product of:
          0.0059931884 = sum of:
            0.0059931884 = weight(_text_:s in 1810) [ClassicSimilarity], result of:
              0.0059931884 = score(doc=1810,freq=2.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.120123915 = fieldWeight in 1810, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1810)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Pages
    xxx S
  17. Seidlmayer, E.: ¬An ontology of digital objects in philosophy : an approach for practical use in research (2018) 0.00
    9.888257E-4 = product of:
      0.0019776514 = sum of:
        0.0019776514 = product of:
          0.005932954 = sum of:
            0.005932954 = weight(_text_:s in 5496) [ClassicSimilarity], result of:
              0.005932954 = score(doc=5496,freq=4.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.118916616 = fieldWeight in 5496, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5496)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Footnote
     Master's thesis in Library and Information Science, Fakultät für Informations- und Kommunikationswissenschaften, Technische Hochschule Köln. A nice touch: it is listed in Google Scholar under 'Eva, S.'.
    Pages
    64 S
  18. Garfield, E.: ¬An algorithm for translating chemical names to molecular formulas (1961) 0.00
    7.0630404E-4 = product of:
      0.0014126081 = sum of:
        0.0014126081 = product of:
          0.004237824 = sum of:
            0.004237824 = weight(_text_:s in 3465) [ClassicSimilarity], result of:
              0.004237824 = score(doc=3465,freq=4.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.08494043 = fieldWeight in 3465, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3465)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
     Doctoral dissertation, University of Pennsylvania, 1961. Cf.: http://www.garfield.library.upenn.edu/essays/v7p441y1984.pdf. Also in: Essays of an information scientist. Vol. 7. Philadelphia, PA: ISI Press, 1985. S.441-513.
    Pages
    73 S
  19. Kirk, J.: Theorising information use : managers and their work (2002) 0.00
    6.992053E-4 = product of:
      0.0013984106 = sum of:
        0.0013984106 = product of:
          0.004195232 = sum of:
            0.004195232 = weight(_text_:s in 560) [ClassicSimilarity], result of:
              0.004195232 = score(doc=560,freq=2.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.08408674 = fieldWeight in 560, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=560)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Pages
    XII, 345 S
  20. Onofri, A.: Concepts in context (2013) 0.00
    6.992053E-4 = product of:
      0.0013984106 = sum of:
        0.0013984106 = product of:
          0.004195232 = sum of:
            0.004195232 = weight(_text_:s in 1077) [ClassicSimilarity], result of:
              0.004195232 = score(doc=1077,freq=8.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.08408674 = fieldWeight in 1077, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1077)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    My thesis discusses two related problems that have taken center stage in the recent literature on concepts: 1) What are the individuation conditions of concepts? Under what conditions is a concept Cv(1) the same concept as a concept Cv(2)? 2) What are the possession conditions of concepts? What conditions must be satisfied for a thinker to have a concept C? The thesis defends a novel account of concepts, which I call "pluralist-contextualist": 1) Pluralism: Different concepts have different kinds of individuation and possession conditions: some concepts are individuated more "coarsely", have less demanding possession conditions and are widely shared, while other concepts are individuated more "finely" and not shared. 2) Contextualism: When a speaker ascribes a propositional attitude to a subject S, or uses his ascription to explain/predict S's behavior, the speaker's intentions in the relevant context determine the correct individuation conditions for the concepts involved in his report. In chapters 1-3 I defend a contextualist, non-Millian theory of propositional attitude ascriptions. Then, I show how contextualism can be used to offer a novel perspective on the problem of concept individuation/possession. More specifically, I employ contextualism to provide a new, more effective argument for Fodor's "publicity principle": if contextualism is true, then certain specific concepts must be shared in order for interpersonally applicable psychological generalizations to be possible. In chapters 4-5 I raise a tension between publicity and another widely endorsed principle, the "Fregean constraint" (FC): subjects who are unaware of certain identity facts and find themselves in so-called "Frege cases" must have distinct concepts for the relevant object x. For instance: the ancient astronomers had distinct concepts (HESPERUS/PHOSPHORUS) for the same object (the planet Venus). 
First, I examine some leading theories of concepts and argue that they cannot meet both of our constraints at the same time. Then, I offer principled reasons to think that no theory can satisfy (FC) while also respecting publicity. (FC) appears to require a form of holism, on which a concept is individuated by its global inferential role in a subject S and can thus only be shared by someone who has exactly the same inferential dispositions as S. This explains the tension between publicity and (FC), since holism is clearly incompatible with concept shareability. To solve the tension, I suggest adopting my pluralist-contextualist proposal: concepts involved in Frege cases are holistically individuated and not public, while other concepts are more coarsely individuated and widely shared; given this "plurality" of concepts, we will then need contextual factors (speakers' intentions) to "select" the specific concepts to be employed in our intentional generalizations in the relevant contexts. In chapter 6 I develop the view further by contrasting it with some rival accounts. First, I examine a very different kind of pluralism about concepts, which has been recently defended by Daniel Weiskopf, and argue that it is insufficiently radical. Then, I consider the inferentialist accounts defended by authors like Peacocke, Rey and Jackson. Such views, I argue, are committed to an implausible picture of reference determination, on which our inferential dispositions fix the reference of our concepts: this leads to wrong predictions in all those cases of scientific disagreement where two parties have very different inferential dispositions and yet seem to refer to the same natural kind.
    Pages
    243 S