Search (160 results, page 8 of 8)

  • Active filter: language_ss:"e"
  • Active filter: type_ss:"el"
  1. Baker, T.: A grammar of Dublin Core (2000) 0.01
    0.006171255 = weight(_text_:22 in 1236) [ClassicSimilarity]: tf 1.4142135 (freq 2.0), idf 3.5018296 (docFreq 3622, maxDocs 44218), queryNorm 0.045548957, fieldNorm 0.03125, coord 0.25 (1/2 × 1/2)
    
    Date
    26.12.2011 14:01:22
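
  The breakdown above is Lucene "explain" output for ClassicSimilarity, and the same pattern repeats for every hit below: a clause scores tf × idf² × queryNorm × fieldNorm, scaled by the coord factors (here 1/2 × 1/2 = 0.25, one matching clause out of two at each of two levels). A minimal Python sketch reproducing entry 1's figures; queryNorm, fieldNorm, and the coord product are taken as given from the explain output rather than recomputed:

    import math

    def classic_similarity(freq, doc_freq, max_docs, query_norm, field_norm, coord):
        # One term clause under Lucene's ClassicSimilarity (TFIDFSimilarity)
        tf = math.sqrt(freq)                               # 1.4142135 for freq=2.0
        idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 3.5018296
        query_weight = idf * query_norm                    # 0.15950468 (queryWeight)
        field_weight = tf * idf * field_norm               # 0.15476047 (fieldWeight)
        return query_weight * field_weight * coord

    # weight(_text_:22 in 1236) from entry 1:
    print(classic_similarity(2.0, 3622, 44218, 0.045548957, 0.03125, 0.25))
    # -> 0.006171..., displayed rounded as 0.01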
  2. Bradford, R.B.: Relationship discovery in large text collections using Latent Semantic Indexing (2006) 0.01
    0.006171255 = weight(_text_:22 in 1163) [ClassicSimilarity]: tf 1.4142135 (freq 2.0), idf 3.5018296 (docFreq 3622, maxDocs 44218), queryNorm 0.045548957, fieldNorm 0.03125, coord 0.25 (1/2 × 1/2)
    
    Source
    Proceedings of the Fourth Workshop on Link Analysis, Counterterrorism, and Security, SIAM Data Mining Conference, Bethesda, MD, 20-22 April, 2006. [http://www.siam.org/meetings/sdm06/workproceed/Link%20Analysis/15.pdf]
  3. Somers, J.: Torching the modern-day library of Alexandria : somewhere at Google there is a database containing 25 million books and nobody is allowed to read them. (2017) 0.01
    0.006171255 = weight(_text_:22 in 3608) [ClassicSimilarity]: tf 1.4142135 (freq 2.0), idf 3.5018296 (docFreq 3622, maxDocs 44218), queryNorm 0.045548957, fieldNorm 0.03125, coord 0.25 (1/2 × 1/2)
    
    Abstract
     You were going to get one-click access to the full text of nearly every book that's ever been published. Books still in print you'd have to pay for, but everything else (a collection slated to grow larger than the holdings at the Library of Congress, Harvard, the University of Michigan, or any of the great national libraries of Europe) would have been available for free at terminals that were going to be placed in every local library that wanted one. At the terminal you were going to be able to search tens of millions of books and read every page of any book you found. You'd be able to highlight passages and make annotations and share them; for the first time, you'd be able to pinpoint an idea somewhere inside the vastness of the printed record, and send somebody straight to it with a link. Books would become as instantly available, searchable, and copy-pasteable, as alive in the digital world, as web pages. It was to be the realization of a long-held dream. "The universal library has been talked about for millennia," Richard Ovenden, the head of Oxford's Bodleian Libraries, has said. "It was possible to think in the Renaissance that you might be able to amass the whole of published knowledge in a single room or a single institution." In the spring of 2011, it seemed we'd amassed it in a terminal small enough to fit on a desk. "This is a watershed event and can serve as a catalyst for the reinvention of education, research, and intellectual life," one eager observer wrote at the time. On March 22 of that year, however, the legal agreement that would have unlocked a century's worth of books and peppered the country with access terminals to a universal library was rejected under Rule 23(e)(2) of the Federal Rules of Civil Procedure by the U.S. District Court for the Southern District of New York. When the library at Alexandria burned, it was said to be an "international catastrophe." When the most significant humanities project of our time was dismantled in court, the scholars, archivists, and librarians who'd had a hand in its undoing breathed a sigh of relief, for they believed, at the time, that they had narrowly averted disaster.
  4. Heckner, M.; Mühlbacher, S.; Wolff, C.: Tagging tagging : a classification model for user keywords in scientific bibliography management systems (2007) 0.01
    0.0059878635 = weight(_text_:c in 533) [ClassicSimilarity]: tf 1.4142135 (freq 2.0), idf 3.4494052 (docFreq 3817, maxDocs 44218), queryNorm 0.045548957, fieldNorm 0.03125, coord 0.25 (1/2 × 1/2)
    
  5. DeVorsey, K.L.; Elson, C.; Gregorev, N.P.; Hansen, J.: The development of a local thesaurus to improve access to the anthropological collections of the American Museum of Natural History (2006) 0.01
    0.0059878635 = weight(_text_:c in 1174) [ClassicSimilarity]: tf 1.4142135 (freq 2.0), idf 3.4494052 (docFreq 3817, maxDocs 44218), queryNorm 0.045548957, fieldNorm 0.03125, coord 0.25 (1/2 × 1/2)
    
  6. Crane, G.: What do you do with a million books? (2006) 0.01
    0.0059878635 = weight(_text_:c in 1180) [ClassicSimilarity]: tf 1.4142135 (freq 2.0), idf 3.4494052 (docFreq 3817, maxDocs 44218), queryNorm 0.045548957, fieldNorm 0.03125, coord 0.25 (1/2 × 1/2)
    
    Abstract
    The Greek historian Herodotus has the Athenian sage Solon estimate the lifetime of a human being at c. 26,250 days (Herodotus, The Histories, 1.32). If we could read a book on each of those days, it would take almost forty lifetimes to work through every volume in a single million book library. The continuous tradition of written European literature that began with the Iliad and Odyssey in the eighth century BCE is itself little more than a million days old. While libraries that contain more than one million items are not unusual, print libraries never possessed a million books of use to any one reader. The great libraries that took shape in the nineteenth and twentieth centuries were meta-structures, whose catalogues and finding aids allowed readers to create their own customized collections, building on the fixed classification schemes and disciplinary structures that took shape in the nineteenth century. The digital libraries of the early twenty-first century can be searched and their contents transmitted around the world. They can contain time-based media, images, quantitative data, and a far richer array of content than print, with visualization technologies blurring the boundaries between library and museum. But our digital libraries remain filled with digital incunabula - digital objects whose form remains firmly rooted in traditions of print, with HTML and PDF largely mimicking the limitations of their print predecessors. Vast collections based on image books - raw digital pictures of books with searchable but uncorrected text from OCR - could arguably retard our long-term progress, reinforcing the hegemony of structures that evolved to minimize the challenges of a world where paper was the only medium of distribution and where humans alone could read. Already the books in a digital library are beginning to read one another and to confer among themselves before creating a new synthetic document for review by their human readers.
  7. Birmingham, W.; Pardo, B.; Meek, C.; Shifrin, J.: The MusArt music-retrieval system (2002) 0.01
    0.0059878635 = weight(_text_:c in 1205) [ClassicSimilarity]: tf 1.4142135 (freq 2.0), idf 3.4494052 (docFreq 3817, maxDocs 44218), queryNorm 0.045548957, fieldNorm 0.03125, coord 0.25 (1/2 × 1/2)
    
  8. Roszkowski, M.; Lukas, C.: A distributed architecture for resource discovery using metadata (1998) 0.01
    0.0059878635 = weight(_text_:c in 1256) [ClassicSimilarity]: tf 1.4142135 (freq 2.0), idf 3.4494052 (docFreq 3817, maxDocs 44218), queryNorm 0.045548957, fieldNorm 0.03125, coord 0.25 (1/2 × 1/2)
    
  9. Lange, C.: Ontologies and languages for representing mathematical knowledge on the Semantic Web (2011) 0.01
    0.0059878635 = weight(_text_:c in 135) [ClassicSimilarity]: tf 1.4142135 (freq 2.0), idf 3.4494052 (docFreq 3817, maxDocs 44218), queryNorm 0.045548957, fieldNorm 0.03125, coord 0.25 (1/2 × 1/2)
    
  10. Waard, A. de; Fluit, C.; Harmelen, F. van: Drug Ontology Project for Elsevier (DOPE) (2007) 0.01
    0.0059878635 = weight(_text_:c in 758) [ClassicSimilarity]: tf 1.4142135 (freq 2.0), idf 3.4494052 (docFreq 3817, maxDocs 44218), queryNorm 0.045548957, fieldNorm 0.03125, coord 0.25 (1/2 × 1/2)
    
  11. Oberhauser, O.: Card-Image Public Access Catalogues (CIPACs) : a critical consideration of a cost-effective alternative to full retrospective catalogue conversion (2002) 0.01
    0.00523938 = weight(_text_:c in 1703) [ClassicSimilarity]: tf 1.4142135 (freq 2.0), idf 3.4494052 (docFreq 3817, maxDocs 44218), queryNorm 0.045548957, fieldNorm 0.02734375, coord 0.25 (1/2 × 1/2)
    
    Footnote
     Review in: ABI-Technik 21(2002) H.3, S.292 (E. Pietzsch): "With his diploma thesis, Otto C. Oberhauser has presented an impressive analysis of card-image public access catalogues (CIPACs). The work offers a wealth of data and statistics of a kind not previously available. Librarians who are considering digitizing their catalogues will find in it a unique basis for decision-making. After an introductory chapter, Oberhauser first surveys a selection of CIPACs available worldwide and their indexing methods (binary search, partial indexing, search in OCR data), and offers comparative observations on geographic distribution, size, software, navigation, and other properties. He then describes and analyses implementation problems, beginning with the reasons that may lead to digitization: costs, time to completion, improved access, savings in shelf space. He continues with technical aspects such as scanning and quality control, image standards, OCR, manual post-processing, and server technology. In doing so he also addresses the rather obstructive characteristics of older catalogues, as well as presentation on the web and integration with existing OPACs. To one important aspect, namely the assessment by the most important target group, the library users, Oberhauser has devoted a field study of his own, whose results he analyses in depth in the final chapter. Appendices on the method of data collection and individual descriptions of many catalogues round off the work. Overall, I can only call this the most impressive collection of data, statistics, and analyses on the subject of CIPACs that I have encountered so far. One nicely worked-out individual aspect, namely the extensive fragmentation of the software systems in use, deserves particular mention: at present we can roughly distinguish between complete solutions (a commissioned firm acts as general contractor and carries out all tasks, from digitization to delivery of the finished application) and split solutions (digitization is contracted out separately from indexing and software development, or is done in-house). The latter presuppose in-house project management. Yet in-house software development in particular can lead to solutions that are in no way inferior to commercial offerings. It is only a pity that the many in-house developments have not yet led to initiatives that, much like public domain software, aim at an "optimal", inexpensive, and widely accepted software solution. A few critical remarks should nevertheless not go unmentioned. For example, there is no differentiation between "guide card" systems, i.e. those indexing every 20th or 50th card, and systems with complete indexing of all card headings, even though this far-reaching design decision shifts considerable costs between catalogue creation and later use. In the statistical evaluation of the field study, too, I would have liked a finer breakdown by type of CIPAC or by library. For instance, more than half of the users surveyed stated that operating the CIPAC was initially hard to understand or that using it was time-consuming. It remains open, however, whether there are differences between the various implementation types."
  12. Thomas, C.; McDonald, R.H.; McDowell, C.S.: Overview - Repositories by the numbers (2007) 0.01
    0.00523938 = weight(_text_:c in 1169) [ClassicSimilarity]: tf 1.4142135 (freq 2.0), idf 3.4494052 (docFreq 3817, maxDocs 44218), queryNorm 0.045548957, fieldNorm 0.02734375, coord 0.25 (1/2 × 1/2)
    
  13. Arms, W.Y.; Blanchi, C.; Overly, E.A.: An architecture for information in digital libraries (1997) 0.01
    0.00523938 = weight(_text_:c in 1260) [ClassicSimilarity]: tf 1.4142135 (freq 2.0), idf 3.4494052 (docFreq 3817, maxDocs 44218), queryNorm 0.045548957, fieldNorm 0.02734375, coord 0.25 (1/2 × 1/2)
    
  14. Daniel Jr., R.; Lagoze, C.: Extending the Warwick framework : from metadata containers to active digital objects (1997) 0.01
    0.00523938 = weight(_text_:c in 1264) [ClassicSimilarity]: tf 1.4142135 (freq 2.0), idf 3.4494052 (docFreq 3817, maxDocs 44218), queryNorm 0.045548957, fieldNorm 0.02734375, coord 0.25 (1/2 × 1/2)
    
  15. ALA / Subcommittee on Subject Relationships/Reference Structures: Final Report to the ALCTS/CCS Subject Analysis Committee (1997) 0.01
    0.00523938 = weight(_text_:c in 1800) [ClassicSimilarity]: tf 1.4142135 (freq 2.0), idf 3.4494052 (docFreq 3817, maxDocs 44218), queryNorm 0.045548957, fieldNorm 0.02734375, coord 0.25 (1/2 × 1/2)
    
    Content
     Contains:
     Appendix A: Subcommittee on Subject Relationships/Reference Structures - Report to the ALCTS/CCS Subject Analysis Committee - July 1996
     Appendix B (part 1): Taxonomy of Subject Relationships. Compiled by Dee Michel with the assistance of Pat Kuhr - June 1996 draft (alphabetical display) (separately in: http://web2.ala.org/ala/alctscontent/CCS/committees/subjectanalysis/subjectrelations/msrscu2.pdf)
     Appendix B (part 2): Taxonomy of Subject Relationships. Compiled by Dee Michel with the assistance of Pat Kuhr - June 1996 draft (hierarchical display)
     Appendix C: Checklist of Candidate Subject Relationships for Information Retrieval. Compiled by Dee Michel, Pat Kuhr, and Jane Greenberg; edited by Greg Wool - June 1997
     Appendix D: Review of Reference Displays in Selected CD-ROM Abstracts and Indexes, by Harriette Hemmasi and Steven Riel
     Appendix E: Analysis of Relationships in Six LC Subject Authority Records, by Harriette Hemmasi and Gary Strawn
     Appendix F: Report of a Preliminary Survey of Subject Referencing in OPACs, by Gregory Wool
     Appendix G: LC Subject Referencing in OPACs: Why Bother?, by Gregory Wool
     Appendix H: Research Needs on Subject Relationships and Reference Structures in Information Access. Compiled by Jane Greenberg and Steven Riel, with contributions from Dee Michel and others; edited by Gregory Wool
     Appendix I: Bibliography on Subject Relationships. Compiled mostly by Dee Michel, with additional contributions from Jane Greenberg, Steven Riel, and Gregory Wool
  16. Onofri, A.: Concepts in context (2013) 0.01
    0.00523938 = weight(_text_:c in 1077) [ClassicSimilarity]: tf 1.4142135 (freq 2.0), idf 3.4494052 (docFreq 3817, maxDocs 44218), queryNorm 0.045548957, fieldNorm 0.02734375, coord 0.25 (1/2 × 1/2)
    
    Abstract
    My thesis discusses two related problems that have taken center stage in the recent literature on concepts: 1) What are the individuation conditions of concepts? Under what conditions is a concept Cv(1) the same concept as a concept Cv(2)? 2) What are the possession conditions of concepts? What conditions must be satisfied for a thinker to have a concept C? The thesis defends a novel account of concepts, which I call "pluralist-contextualist": 1) Pluralism: Different concepts have different kinds of individuation and possession conditions: some concepts are individuated more "coarsely", have less demanding possession conditions and are widely shared, while other concepts are individuated more "finely" and not shared. 2) Contextualism: When a speaker ascribes a propositional attitude to a subject S, or uses his ascription to explain/predict S's behavior, the speaker's intentions in the relevant context determine the correct individuation conditions for the concepts involved in his report. In chapters 1-3 I defend a contextualist, non-Millian theory of propositional attitude ascriptions. Then, I show how contextualism can be used to offer a novel perspective on the problem of concept individuation/possession. More specifically, I employ contextualism to provide a new, more effective argument for Fodor's "publicity principle": if contextualism is true, then certain specific concepts must be shared in order for interpersonally applicable psychological generalizations to be possible. In chapters 4-5 I raise a tension between publicity and another widely endorsed principle, the "Fregean constraint" (FC): subjects who are unaware of certain identity facts and find themselves in so-called "Frege cases" must have distinct concepts for the relevant object x. For instance: the ancient astronomers had distinct concepts (HESPERUS/PHOSPHORUS) for the same object (the planet Venus). First, I examine some leading theories of concepts and argue that they cannot meet both of our constraints at the same time. Then, I offer principled reasons to think that no theory can satisfy (FC) while also respecting publicity. (FC) appears to require a form of holism, on which a concept is individuated by its global inferential role in a subject S and can thus only be shared by someone who has exactly the same inferential dispositions as S. This explains the tension between publicity and (FC), since holism is clearly incompatible with concept shareability. To solve the tension, I suggest adopting my pluralist-contextualist proposal: concepts involved in Frege cases are holistically individuated and not public, while other concepts are more coarsely individuated and widely shared; given this "plurality" of concepts, we will then need contextual factors (speakers' intentions) to "select" the specific concepts to be employed in our intentional generalizations in the relevant contexts. In chapter 6 I develop the view further by contrasting it with some rival accounts. First, I examine a very different kind of pluralism about concepts, which has been recently defended by Daniel Weiskopf, and argue that it is insufficiently radical. Then, I consider the inferentialist accounts defended by authors like Peacocke, Rey and Jackson. 
Such views, I argue, are committed to an implausible picture of reference determination, on which our inferential dispositions fix the reference of our concepts: this leads to wrong predictions in all those cases of scientific disagreement where two parties have very different inferential dispositions and yet seem to refer to the same natural kind.
  17. Lavoie, B.; Connaway, L.S.; Dempsey, L.: Anatomy of aggregate collections : the example of Google Print for libraries (2005) 0.00
    0.004628441 = weight(_text_:22 in 1184) [ClassicSimilarity]: tf 1.4142135 (freq 2.0), idf 3.5018296 (docFreq 3622, maxDocs 44218), queryNorm 0.045548957, fieldNorm 0.0234375, coord 0.25 (1/2 × 1/2)
    
    Date
    26.12.2011 14:08:22
  18. DeSilva, J.M.; Traniello, J.F.A.; Claxton, A.G.; Fannin, L.D.: When and why did human brains decrease in size? : a new change-point analysis and insights from brain evolution in ants (2021) 0.00
    0.004628441 = weight(_text_:22 in 405) [ClassicSimilarity]: tf 1.4142135 (freq 2.0), idf 3.5018296 (docFreq 3622, maxDocs 44218), queryNorm 0.045548957, fieldNorm 0.0234375, coord 0.25 (1/2 × 1/2)
    
    Source
    Frontiers in ecology and evolution, 22 October 2021 [https://www.frontiersin.org/articles/10.3389/fevo.2021.742639/full]
  19. Crane, G.; Jones, A.: Text, information, knowledge and the evolving record of humanity (2006) 0.00
    0.0037424145 = weight(_text_:c in 1182) [ClassicSimilarity]: tf 1.4142135 (freq 2.0), idf 3.4494052 (docFreq 3817, maxDocs 44218), queryNorm 0.045548957, fieldNorm 0.01953125, coord 0.25 (1/2 × 1/2)
    
    Abstract
     Consider a sentence such as "the current price of tea in China is 35 cents per pound." In a library with millions of books we might find many statements of the above form that we could capture today with relatively simple rules: rather than pursuing every variation of a statement, programs can wait, like predators at a water hole, for their informational prey to reappear in a standard linguistic pattern. We can make inferences from sentences such as "NAME1 born at NAME2 in DATE" that NAME1 more likely than not represents a person and NAME2 a place, and then convert the statement into a proposition about a person born at a given place and time. The changing price of tea in China, pedestrian birth and death dates, or other basic statements may not be truth and beauty in the Phaedrus, but a digital library that could plot the prices of various commodities in different markets over time, plot the various lifetimes of individuals, or extract and classify many events would be very useful. Services such as the Syllabus Finder and H-Bot (which Dan Cohen describes elsewhere in this issue of D-Lib) represent examples of information extraction already in use. H-Bot, in particular, builds on our evolving ability to extract information from very large corpora such as the billions of web pages available through the Google API. Aside from identifying higher-order statements, however, users also want to search and browse named entities: they want to read about "C. P. E. Bach" rather than his father "Johann Sebastian", or about "Cambridge, Maryland" without hearing about "Cambridge, Massachusetts", Cambridge in the UK, or any of the other Cambridges scattered around the world. Named entity identification is a well-established area with an ongoing literature. The Natural Language Processing Research Group at the University of Sheffield has developed its open source Generalized Architecture for Text Engineering (GATE) for years, while IBM's Unstructured Information Analysis and Search (UIMA) is "available as open source software to provide a common foundation for industry and academia." Powerful tools are thus freely available, and more demanding users can draw upon published literature to develop their own systems. Major search engines such as Google and Yahoo also integrate increasingly sophisticated tools to categorize and identify places. The software resources are rich and expanding. The reference works on which these systems depend, however, are ill-suited for historical analysis. First, simple gazetteers and similar authority lists quickly grow too big for useful information extraction. They provide us with potential entities against which to match textual references, but existing electronic reference works assume that human readers can use their knowledge of geography and of the immediate context to pick the right Boston from the Bostons in the Getty Thesaurus of Geographic Names (TGN); with the crucial exception of geographic location, the TGN records do not provide any machine-readable clues: we cannot tell which Bostons are large or small. If we are analyzing a document published in 1818, we cannot filter out those places that did not yet exist or that had different names: "Jefferson Davis" is not the name of a parish in Louisiana (tgn,2000880) or a county in Mississippi (tgn,2001118) until after the Civil War.
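
  The template extraction this abstract describes ("NAME1 born at NAME2 in DATE") can be illustrated with a simple surface pattern. The regex and the sample sentence below are illustrative assumptions only; production systems such as the GATE and UIMA toolkits mentioned above rely on gazetteers and trained named-entity taggers rather than a single regular expression:

    import re

    # Illustrative "NAME1 born at NAME2 in DATE" surface pattern
    BORN_AT = re.compile(
        r"(?P<person>[A-Z][\w.]*(?: [A-Z][\w.]*)*) born at "
        r"(?P<place>[A-Z][\w.]*(?: [A-Z][\w.]*)*) in (?P<year>\d{4})"
    )

    m = BORN_AT.search("Johann Sebastian Bach born at Eisenach in 1685")
    if m:
        # Promote the matched sentence to a structured proposition
        print({"person": m.group("person"),
               "place": m.group("place"),
               "year": int(m.group("year"))})
    # -> {'person': 'Johann Sebastian Bach', 'place': 'Eisenach', 'year': 1685}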
  20. Lagoze, C.: Keeping Dublin Core simple : Cross-domain discovery or resource description? (2001) 0.00
    0.0037424145 = weight(_text_:c in 1216) [ClassicSimilarity]: tf 1.4142135 (freq 2.0), idf 3.4494052 (docFreq 3817, maxDocs 44218), queryNorm 0.045548957, fieldNorm 0.01953125, coord 0.25 (1/2 × 1/2)
    

Types

  • a 82
  • s 4
  • m 3
  • n 2
  • r 2
  • x 2
  • b 1
  • i 1
  • p 1