Search (5839 results, page 292 of 292)

  • language_ss:"e"
  1. Lipow, A.G.: The virtual reference librarian's handbook (2003) 0.00
    0.0025342298 = product of:
      0.0050684595 = sum of:
        0.0050684595 = product of:
          0.015205379 = sum of:
            0.015205379 = weight(_text_:22 in 3992) [ClassicSimilarity], result of:
              0.015205379 = score(doc=3992,freq=2.0), product of:
                0.15720168 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044891298 = queryNorm
                0.09672529 = fieldWeight in 3992, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3992)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    22. 3.2004 14:46:50
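    The expanded explanation above is Lucene ClassicSimilarity (tf-idf) scoring. As a reading aid only, here is a minimal Python sketch that reproduces the arithmetic from the numbers shown for result 1; the function name and structure are illustrative assumptions, not part of the catalogue software.

      import math

      def classic_similarity_score(freq, doc_freq, max_docs, query_norm, field_norm, coords):
          # Rebuild the per-term score from the "explain" tree shown above (illustrative only).
          tf = math.sqrt(freq)                               # 1.4142135 for freq=2.0
          idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 3.5018296 for docFreq=3622, maxDocs=44218
          query_weight = idf * query_norm                    # 0.15720168
          field_weight = tf * idf * field_norm               # 0.09672529 = fieldWeight
          score = query_weight * field_weight                # 0.015205379 = weight(_text_:22)
          for c in coords:                                   # coord(1/3) and coord(1/2)
              score *= c
          return score

      # Values copied from the explanation for doc 3992:
      print(classic_similarity_score(2.0, 3622, 44218, 0.044891298, 0.01953125, [1/3, 0.5]))
      # prints approximately 0.0025342, matching the displayed score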
  2. Plieninger, J.: Vermischtes und noch mehr ... : Ein Essay über die (vergebliche) Nutzung bibliothekarischer Erschließungssysteme in der neuen digitalen Ordnung (2007) 0.00
    0.0025342298 = product of:
      0.0050684595 = sum of:
        0.0050684595 = product of:
          0.015205379 = sum of:
            0.015205379 = weight(_text_:22 in 680) [ClassicSimilarity], result of:
              0.015205379 = score(doc=680,freq=2.0), product of:
                0.15720168 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044891298 = queryNorm
                0.09672529 = fieldWeight in 680, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=680)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    4.11.2007 13:22:29
  3. Metoyer, C.A.; Doyle, A.M.: Introduction to a special issue on "Indigenous Knowledge Organization" (2015) 0.00
    0.0025342298 = product of:
      0.0050684595 = sum of:
        0.0050684595 = product of:
          0.015205379 = sum of:
            0.015205379 = weight(_text_:22 in 2186) [ClassicSimilarity], result of:
              0.015205379 = score(doc=2186,freq=2.0), product of:
                0.15720168 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044891298 = queryNorm
                0.09672529 = fieldWeight in 2186, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2186)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    26. 8.2015 19:22:31
  4. Cross-language information retrieval (1998) 0.00
    0.0024589198 = product of:
      0.0049178395 = sum of:
        0.0049178395 = product of:
          0.014753518 = sum of:
            0.014753518 = weight(_text_:c in 6299) [ClassicSimilarity], result of:
              0.014753518 = score(doc=6299,freq=2.0), product of:
                0.15484828 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.044891298 = queryNorm
                0.09527725 = fieldWeight in 6299, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=6299)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Contains the following contributions: GREFENSTETTE, G.: The Problem of Cross-Language Information Retrieval; DAVIS, M.W.: On the Effective Use of Large Parallel Corpora in Cross-Language Text Retrieval; BALLESTEROS, L. and W.B. CROFT: Statistical Methods for Cross-Language Information Retrieval; Distributed Cross-Lingual Information Retrieval; Automatic Cross-Language Information Retrieval Using Latent Semantic Indexing; EVANS, D.A. et al.: Mapping Vocabularies Using Latent Semantics; PICCHI, E. and C. PETERS: Cross-Language Information Retrieval: A System for Comparable Corpus Querying; YAMABANA, K. et al.: A Language Conversion Front-End for Cross-Language Information Retrieval; GACHOT, D.A. et al.: The Systran NLP Browser: An Application of Machine Translation Technology in Cross-Language Information Retrieval; HULL, D.: A Weighted Boolean Model for Cross-Language Text Retrieval; SHERIDAN, P. et al.: Building a Large Multilingual Test Collection from Comparable News Documents; OARD, D.W. and B.J. DORR: Evaluating Cross-Language Text Filtering Effectiveness
  5. Schwartz, C.: Sorting out the Web : approaches to subject access (2001) 0.00
    0.0024589198 = product of:
      0.0049178395 = sum of:
        0.0049178395 = product of:
          0.014753518 = sum of:
            0.014753518 = weight(_text_:c in 2050) [ClassicSimilarity], result of:
              0.014753518 = score(doc=2050,freq=2.0), product of:
                0.15484828 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.044891298 = queryNorm
                0.09527725 = fieldWeight in 2050, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2050)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  6. Arlt, H.-J.; Prange, C.: Gut, dass wir gesprochen haben : Im Reformprozess von Organisationen kommt der Kommunikation eine Schlüsselrolle zu (2005) 0.00
    0.0024589198 = product of:
      0.0049178395 = sum of:
        0.0049178395 = product of:
          0.014753518 = sum of:
            0.014753518 = weight(_text_:c in 3322) [ClassicSimilarity], result of:
              0.014753518 = score(doc=3322,freq=2.0), product of:
                0.15484828 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.044891298 = queryNorm
                0.09527725 = fieldWeight in 3322, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3322)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  7. Information systems and the economies of innovation (2003) 0.00
    0.0024589198 = product of:
      0.0049178395 = sum of:
        0.0049178395 = product of:
          0.014753518 = sum of:
            0.014753518 = weight(_text_:c in 3586) [ClassicSimilarity], result of:
              0.014753518 = score(doc=3586,freq=2.0), product of:
                0.15484828 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.044891298 = queryNorm
                0.09527725 = fieldWeight in 3586, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3586)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Editor
    Avgerou, C. and R.L. La Rovere
  8. Theory of subject analysis : A sourcebook (1985) 0.00
    0.0024589198 = product of:
      0.0049178395 = sum of:
        0.0049178395 = product of:
          0.014753518 = sum of:
            0.014753518 = weight(_text_:c in 3622) [ClassicSimilarity], result of:
              0.014753518 = score(doc=3622,freq=2.0), product of:
                0.15484828 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.044891298 = queryNorm
                0.09527725 = fieldWeight in 3622, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3622)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    An excellent compilation and reproduction (annotated by the editors) of the following original contributions: CUTTER, C.A.: Subjects; DEWEY, M.: Decimal classification and relativ index: introduction; HOPWOOD, H.V.: Dewey expanded; HULME, E.W.: Principles of book classification; KAISER, J.O.: Systematic indexing; MARTEL, C.: Classification: a brief conspectus of present day library practice; BLISS, H.E.: A bibliographic classification: principles and definitions; RANGANATHAN, S.R.: Facet analysis: fundamental categories; PETTEE, J.: The subject approach to books and the development of the dictionary catalog; PETTEE, J.: Fundamental principles of the dictionary catalog; PETTEE, J.: Public libraries and libraries as purveyors of information; HAYKIN, D.J.: Subject headings: fundamental concepts; TAUBE, M.: Functional approach to bibliographic organization: a critique and a proposal; VICKERY, B.C.: Systematic subject indexing; FEIBLEMAN, J.K.: Theory of integrative levels; GARFIELD, E.: Citation indexes for science; CRG: The need for a faceted classification as the basis of all methods of information retrieval; LUHN, H.P.: Keyword-in-context index for technical literature; COATES, E.J.: Significance and term relationship in compound headings; FARRADANE, J.E.L.: Fundamental fallacies and new needs in classification; FOSKETT, D.J.: Classification and integrative levels; CLEVERDON, C.W. and J. MILLS: The testing of index language devices; MOOERS, C.N.: The indexing language of an information retrieval system; NEEDHAM, R.M. and K. SPARCK JONES: Keywords and clumps; ROLLING, L.: The role of graphic display of concept relationships in indexing and retrieval vocabularies; BORKO, H.: Research in computer based classification systems; WILSON, P.: Subjects and the sense of position; LANCASTER, F.W.: Evaluating the performance of a large computerized information system; SALTON, G.: Automatic processing of foreign language documents; FAIRTHORNE, R.A.: Temporal structure in bibliographic classification; AUSTIN, D. and J.A. DIGGER: PRECIS: The Preserved Context Index System; FUGMANN, R.: The complementarity of natural and indexing languages
  9. Hirko, B.; Ross, M.B.: Virtual reference training : the complete guide to providing anytime anywhere answers (2004) 0.00
    0.0024589198 = product of:
      0.0049178395 = sum of:
        0.0049178395 = product of:
          0.014753518 = sum of:
            0.014753518 = weight(_text_:c in 4818) [ClassicSimilarity], result of:
              0.014753518 = score(doc=4818,freq=2.0), product of:
                0.15484828 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.044891298 = queryNorm
                0.09527725 = fieldWeight in 4818, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=4818)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Footnote
    The real core of the SVRP, that is, the VRT training activities actually utilized by Washington State, is presented in Chapter Seven in roughly the same order as they took place in the course (train the trainer, orientation, chat practice, multitasking skills, virtual field trips, secret patron, transcript review, checking out the competition, policy and procedure review, sharing via a discussion list, and online meetings). Most interestingly, Chapter Eight deals with behavior, an issue rarely discussed in the context of librarianship, let alone providing reference services. As stated by the authors, "the most difficult aspect of digital reference service involves incorporating model reference interview techniques into an online transaction" (p. 74). The SVRP utilized an "online secret patron scenario" as a training tool that helped the student get the question straight, kept the customer informed, and provided the information required by the patron. The final chapter of the book reviews the important tasks of evaluation, modification, and follow-up. To that end, evaluative material is described and linked to Appendix A (assessment tools). In addition, evaluative tasks such as trainer debriefings and consultation with others participating in the SVRP are described. Finally, the chapter includes examples of unexpected consequences experienced in evaluating VRT services (from total inability to handle online transactions to poor marketing or branding of online services). Many useful appendices are included in this book. Appendix A provides examples of several assessment tools used during the "Anytime, Anywhere Answers" program. Appendix B consists of actual transcripts (edited) designed to illustrate good and bad virtual reference transactions. The transcripts illustrate transactions involving helping with homework, source citing, providing an opinion, suggesting print materials, and clarifying a question. This appendix should be required reading as it provides real-world examples of VRT in action. Appendix C is a copy of a VRT field trip questionnaire. The next appendix, like Appendix B, should be required reading as it includes an actual transcript from seven secret patron scenarios. A policies and procedures checklist is provided in Appendix E. Yet another critical source of information is presented in Appendix F, an online meeting transcript. This transcript is the result of an online meeting conducted during an SVRP training class held in 2003. According to the authors, it is an example of the positive working relationship developed during a five-week learning course. The remaining appendices (G through I) present information about support materials used in the SVRP, the SVRP budget, and trainer notes and tips. Clearly, VRT is a skill and resource that information professionals need to embrace, and this book does a fine job of outlining the essentials. It is apparent that the Washington State experience with VRT was a pioneering venture and is a model that other information professionals may seek to embrace, if not emulate, in developing their own VRT programs. However, this book is not a "complete guide" to VRT. There is too rapid development in virtual environments for anyone to claim such an achievement. However, it is likely the most "complete" guide to the Washington State experience that will be published; therefore, this book should serve as a thorough and revelatory guide to VRT for several years to come.
  10. Crane, G.; Jones, A.: Text, information, knowledge and the evolving record of humanity (2006) 0.00
    0.0024589198 = product of:
      0.0049178395 = sum of:
        0.0049178395 = product of:
          0.014753518 = sum of:
            0.014753518 = weight(_text_:c in 1182) [ClassicSimilarity], result of:
              0.014753518 = score(doc=1182,freq=2.0), product of:
                0.15484828 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.044891298 = queryNorm
                0.09527725 = fieldWeight in 1182, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1182)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    Consider a sentence such as "the current price of tea in China is 35 cents per pound." In a library with millions of books we might find many statements of the above form that we could capture today with relatively simple rules: rather than pursuing every variation of a statement, programs can wait, like predators at a water hole, for their informational prey to reappear in a standard linguistic pattern. We can make inferences from sentences such as "NAME1 born at NAME2 in DATE" that NAME1 more likely than not represents a person and NAME2 a place and then convert the statement into a proposition about a person born at a given place and time. The changing price of tea in China, pedestrian birth and death dates, or other basic statements may not be truth and beauty in the Phaedrus, but a digital library that could plot the prices of various commodities in different markets over time, plot the various lifetimes of individuals, or extract and classify many events would be very useful. Services such as the Syllabus Finder and H-Bot (which Dan Cohen describes elsewhere in this issue of D-Lib) represent examples of information extraction already in use. H-Bot, in particular, builds on our evolving ability to extract information from very large corpora such as the billions of web pages available through the Google API. Aside from identifying higher order statements, however, users also want to search and browse named entities: they want to read about "C. P. E. Bach" rather than his father "Johann Sebastian" or about "Cambridge, Maryland", without hearing about "Cambridge, Massachusetts", Cambridge in the UK or any of the other Cambridges scattered around the world. Named entity identification is a well-established area with an ongoing literature. The Natural Language Processing Research Group at the University of Sheffield has developed its open source Generalized Architecture for Text Engineering (GATE) for years, while IBM's Unstructured Information Analysis and Search (UIMA) is "available as open source software to provide a common foundation for industry and academia." Powerful tools are thus freely available and more demanding users can draw upon published literature to develop their own systems. Major search engines such as Google and Yahoo also integrate increasingly sophisticated tools to categorize and identify places. The software resources are rich and expanding. The reference works on which these systems depend, however, are ill-suited for historical analysis. First, simple gazetteers and similar authority lists quickly grow too big for useful information extraction. They provide us with potential entities against which to match textual references, but existing electronic reference works assume that human readers can use their knowledge of geography and of the immediate context to pick the right Boston from the Bostons in the Getty Thesaurus of Geographic Names (TGN), but, with the crucial exception of geographic location, the TGN records do not provide any machine readable clues: we cannot tell which Bostons are large or small. If we are analyzing a document published in 1818, we cannot filter out those places that did not yet exist or that had different names: "Jefferson Davis" is not the name of a parish in Louisiana (tgn,2000880) or a county in Mississippi (tgn,2001118) until after the Civil War.
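    As a purely illustrative aside on the pattern-based extraction the abstract describes ("NAME1 born at NAME2 in DATE"), a minimal Python sketch follows; the regular expression and helper name are assumptions for demonstration, not the systems named in the article.

      import re

      # Toy pattern for statements of the form "<person> born at <place> in <year>";
      # a real system would add gazetteers and named entity recognition, as the abstract notes.
      BORN_AT = re.compile(
          r"(?P<person>[A-Z][\w.]+(?: [A-Z][\w.]+)*) born at "
          r"(?P<place>[A-Z]\w+(?: [A-Z]\w+)*) in (?P<year>\d{4})"
      )

      def extract_birth_facts(text):
          # Return (person, place, year) propositions matched by the pattern.
          return [(m.group("person"), m.group("place"), m.group("year"))
                  for m in BORN_AT.finditer(text)]

      print(extract_birth_facts("Johann Sebastian Bach born at Eisenach in 1685."))
      # -> [('Johann Sebastian Bach', 'Eisenach', '1685')]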
  11. Lagoze, C.: Keeping Dublin Core simple : Cross-domain discovery or resource description? (2001) 0.00
    0.0024589198 = product of:
      0.0049178395 = sum of:
        0.0049178395 = product of:
          0.014753518 = sum of:
            0.014753518 = weight(_text_:c in 1216) [ClassicSimilarity], result of:
              0.014753518 = score(doc=1216,freq=2.0), product of:
                0.15484828 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.044891298 = queryNorm
                0.09527725 = fieldWeight in 1216, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1216)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  12. Facets: a fruitful notion in many domains : special issue on facet analysis (2008) 0.00
    0.0024589198 = product of:
      0.0049178395 = sum of:
        0.0049178395 = product of:
          0.014753518 = sum of:
            0.014753518 = weight(_text_:c in 3262) [ClassicSimilarity], result of:
              0.014753518 = score(doc=3262,freq=2.0), product of:
                0.15484828 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.044891298 = queryNorm
                0.09527725 = fieldWeight in 3262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3262)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Editor
    Gnoli, C.
  13. Gonzalez, L.: What is FRBR? (2005) 0.00
    0.0023519443 = product of:
      0.0047038887 = sum of:
        0.0047038887 = product of:
          0.014111666 = sum of:
            0.014111666 = weight(_text_:i in 3401) [ClassicSimilarity], result of:
              0.014111666 = score(doc=3401,freq=2.0), product of:
                0.16931784 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.044891298 = queryNorm
                0.083344236 = fieldWeight in 3401, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3401)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    What are these two Beowulf translations "expressions" of? I used the term work above, an even more abstract concept in the FRBR model. In this case, the "work" is Beowulf, that ancient intellectual creation or effort that over time has been expressed in multiple ways, each manifested in several different ways itself, with one or more items in each manifestation. This is a pretty gross oversimplification of FRBR, which also details other relationships: among these entities; between these entities and various persons (such as creators, publishers, and owners); and between these entities and their subjects. It also specifies characteristics, or "attributes," of the different types of entities (such as title, physical media, date, availability, and more). But it should be enough to grasp the possibilities. Now apply it. Imagine that you have a patron who needs a copy of Heaney's translation of Beowulf. She doesn't care who published it or when, only that it's Heaney's translation. What if you (or your patron) could place an interlibrary loan call on that expression, instead of looking through multiple bibliographic records (as of March, OCLC's WorldCat had nine regular print editions) for multiple manifestations and then judging which record is the best bet on which to place a request? Combine that with functionality that lets you specify "not Braille, not large print," and it could save you time. Now imagine a patron in want of a copy, any copy, in English, of Romeo and Juliet. Saving staff time means saving money. Whether or not this actually happens depends upon what the library community decides to do with FRBR. It is not a set of cataloging rules or a system design, but it can influence both. Several library system vendors are working with FRBR ideas; VTLS's current integrated library system product Virtua incorporates FRBR concepts in its design. More vendors may follow. How the Joint Steering Committee for Revision of Anglo-American Cataloging Rules develops the Anglo-American Cataloging Rules (AACR) to incorporate FRBR will necessarily be a strong determinant of how records work in a "FRBR-ized" bibliographic database.
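    As a reading aid for the work/expression/manifestation/item hierarchy sketched above, a minimal Python data model follows; the class and attribute names and the sample data are illustrative assumptions, not FRBR tooling or any vendor implementation.

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class Item:                    # one physical or digital copy
          barcode: str

      @dataclass
      class Manifestation:           # a published embodiment (edition, format)
          publisher: str
          year: int
          physical_media: str
          items: List[Item] = field(default_factory=list)

      @dataclass
      class Expression:              # a realization of the work, e.g. one translation
          language: str
          translator: str
          manifestations: List[Manifestation] = field(default_factory=list)

      @dataclass
      class Work:                    # the abstract intellectual creation
          title: str
          expressions: List[Expression] = field(default_factory=list)

      # Illustrative sample data only.
      beowulf = Work("Beowulf", expressions=[
          Expression("English", "Seamus Heaney", manifestations=[
              Manifestation("Sample Publisher", 2000, "regular print"),
          ]),
      ])

      # "Heaney's translation, not Braille, not large print": filter on the expression,
      # then on manifestation attributes.
      hits = [m for e in beowulf.expressions if e.translator == "Seamus Heaney"
              for m in e.manifestations
              if m.physical_media not in ("Braille", "large print")]
      print(hits)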
  14. Design and usability of digital libraries : case studies in the Asia-Pacific (2005) 0.00
    0.0023519443 = product of:
      0.0047038887 = sum of:
        0.0047038887 = product of:
          0.014111666 = sum of:
            0.014111666 = weight(_text_:i in 93) [ClassicSimilarity], result of:
              0.014111666 = score(doc=93,freq=2.0), product of:
                0.16931784 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.044891298 = queryNorm
                0.083344236 = fieldWeight in 93, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.015625 = fieldNorm(doc=93)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Footnote
    The chapters are generally less than 20 pages, which allows for concise presentations of each case study. Each chapter contains, more or less, a brief abstract, introduction, related works section, methodology section, conclusion, and references. The chapters are further categorized into six thematic sections. Section I focuses on the history of digital libraries in the Asia Pacific. Section II, composed of four chapters, focuses on the design architecture and systems of digital libraries. The next five chapters, in section III, examine challenges in implementing digital library systems. This section is particularly interesting because issues such as multicultural and multilingual barriers are discussed. Section IV is about the use of and impact of digital libraries in a society. All four chapters in this section emphasize improvements that need to be made to digital libraries regarding different types of users. Particularly important is chapter 14, which discusses digital libraries and their effects on youth. The conclusion of this case study revealed that digital libraries need to support peer learning, as there are many social benefits for youth from interacting with peers. Section V, which focuses on users and usability, consists of five chapters. This section relates directly to the implementation challenges that are mentioned in section III, providing specific examples of cross-cultural issues among users that need to be taken into consideration. In addition, section V discusses the differences in media types and the difficulties with transforming these resources into digital formats. For example, chapter 18, which is about designing a music digital library, demonstrates the difficulties in selecting from the numerous types of technologies that can be used to digitize library collections. Finally, the chapter in section VI discusses the future trends of digital libraries. The editors successfully present diverse perspectives about digital libraries, by including case studies performed in numerous different countries throughout the Asia Pacific region. Countries represented in the case studies include Indonesia, Taiwan, India, China, Singapore, New Zealand, Hong Kong, Philippines, Japan, and Malaysia. The diversity of the users in these countries helps to illustrate the numerous differences and similarities that digital library designers need to take into consideration in the future when developing a universal digital library system. In order to create a successful digital library system that can benefit all users, there must be a sense of balance in the technology used, and the authors of the case studies in this book have definitely proved that there are distinct barriers that need to be overcome in order to achieve this harmony.
  15. Lambe, P.: Organising knowledge : taxonomies, knowledge and organisational effectiveness (2007) 0.00
    0.0023519443 = product of:
      0.0047038887 = sum of:
        0.0047038887 = product of:
          0.014111666 = sum of:
            0.014111666 = weight(_text_:i in 1804) [ClassicSimilarity], result of:
              0.014111666 = score(doc=1804,freq=2.0), product of:
                0.16931784 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.044891298 = queryNorm
                0.083344236 = fieldWeight in 1804, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1804)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Footnote
    While each single paragraph of the book is packed with valuable advice and real-life experience, I consider the last chapter to be the most intriguing and ground-breaking one. It's only here that taxonomists meet folksonomists and ontologists in a fundamental attempt to write a new page on the relative position between old and emerging classification techniques. In a well-balanced and sober analysis that foregoes excessive enthusiasm in favor of more appropriate considerations about content scale, domain maturity, precision and cost, knowledge infrastructure tools are all arrayed from inexpensive and expressive folksonomies on one side, to the smart, formal, machine-readable but expensive world of ontologies on the other. In light of so many different tools, information infrastructure clearly appears more as a complex dynamic ecosystem than a static overly designed environment. Such a variety of tasks, perspectives, work activities and paradigms calls for a resilient, adaptive and flexible knowledge environment with a minimum of standardization and uniformity. The right mix of tools and approaches can only be determined case by case, by carefully considering the particular objectives and requirements of the organization while aiming to maximize its overall performance and effectiveness. Starting from the history of taxonomy-building and ending with the emerging trends in Web technologies, artificial intelligence and social computing, Organising Knowledge is thus both a guiding tool and inspirational reading, not only about taxonomies, but also about effectiveness, collaboration and finding middle ground: exactly the right principles to make your intranet, portal or document management tool a rich, evolving and long-lasting ecosystem."
  16. Broughton, V.: Essential thesaurus construction (2006) 0.00
    0.0023519443 = product of:
      0.0047038887 = sum of:
        0.0047038887 = product of:
          0.014111666 = sum of:
            0.014111666 = weight(_text_:i in 2924) [ClassicSimilarity], result of:
              0.014111666 = score(doc=2924,freq=2.0), product of:
                0.16931784 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.044891298 = queryNorm
                0.083344236 = fieldWeight in 2924, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2924)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Footnote
    Further reviews in: New Library World 108(2007) nos.3/4, S.190-191 (K.V. Trickey): "Vanda has provided a very useful work that will enable any reader who is prepared to follow her instruction to produce a thesaurus that will be a quality language-based subject access tool that will make the task of information retrieval easier and more effective. Once again I express my gratitude to Vanda for producing another excellent book." - Electronic Library 24(2006) no.6, S.866-867 (A.G. Smith): "Essential thesaurus construction is an ideal instructional text, with clear bullet point summaries at the ends of sections, and relevant and up to date references, putting thesauri in context with the general theory of information retrieval. But it will also be a valuable reference for any information professional developing or using a controlled vocabulary." - KO 33(2006) no.4, S.215-216 (M.P. Satija)
  17. Broughton, V.: Essential classification (2004) 0.00
    0.0023519443 = product of:
      0.0047038887 = sum of:
        0.0047038887 = product of:
          0.014111666 = sum of:
            0.014111666 = weight(_text_:i in 2824) [ClassicSimilarity], result of:
              0.014111666 = score(doc=2824,freq=2.0), product of:
                0.16931784 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.044891298 = queryNorm
                0.083344236 = fieldWeight in 2824, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2824)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Footnote
    Review in: KO 32(2005) no.1, S.47-49 (M. Hudon): "Vanda Broughton's Essential Classification is the most recent addition to a very small set of classification textbooks published over the past few years. The book's 21 chapters are based very closely on the cataloguing and classification module at the School of Library, Archive and Information Studies at University College London. The author's main objective is clear: this is "first and foremost a book about how to classify. The emphasis throughout is on the activity of classification rather than the theory, the practical problems of the organization of collections, and the needs of the users" (p. 1). This is not a theoretical work, but a basic course in classification and classification scheme application. For this reviewer, who also teaches "Classification 101," this is also a fascinating peek into how a colleague organizes content and structures her course. "Classification is everywhere" (p. 1): the first sentence of this book is also one of the first statements in my own course, and Professor Broughton's metaphors - the supermarket, canned peas, flowers, etc. - are those that are used by our colleagues around the world. The combination of tone, writing style and content display are reader-friendly; they are in fact what make this book remarkable and what distinguishes it from more "formal" textbooks, such as The Organization of Information, the superb text written and recently updated (2004) by Professor Arlene Taylor (2nd ed. Westport, Conn.: Libraries Unlimited, 2004). Reading Essential Classification, at times, feels like being in a classroom, facing a teacher who assures you that "you don't need to worry about this at this stage" (p. 104), and reassures you that, although you now spend a long time looking for things, "you will soon speed up when you get to know the scheme better" (p. 137). This teacher uses redundancy in a productive fashion, and she is not afraid to express her own opinions ("I think that if these concepts are helpful they may be used" (p. 245); "It's annoying that LCC doesn't provide clearer instructions, but if you keep your head and take them one step at a time [i.e. the tables] they're fairly straightforward" (p. 174)). Chapters 1 to 7 present the essential theoretical concepts relating to knowledge organization and to bibliographic classification. The author is adept at making and explaining distinctions: known-item retrieval versus subject retrieval, personal versus public/shared/official classification systems, scientific versus folk classification systems, object versus aspect classification systems, semantic versus syntactic relationships, and so on. Chapters 8 and 9 discuss the practice of classification, through content analysis and subject description. A short discussion of difficult subjects, namely the treatment of unique concepts (persons, places, etc.) as subjects seems a little advanced for a beginners' class.
  18. Ewbank, L.: Crisis in subject cataloging and retrieval (1996) 0.00
    0.0020273838 = product of:
      0.0040547675 = sum of:
        0.0040547675 = product of:
          0.012164302 = sum of:
            0.012164302 = weight(_text_:22 in 5580) [ClassicSimilarity], result of:
              0.012164302 = score(doc=5580,freq=2.0), product of:
                0.15720168 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044891298 = queryNorm
                0.07738023 = fieldWeight in 5580, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=5580)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.90-97
  19. Information visualization in data mining and knowledge discovery (2002) 0.00
    0.0020273838 = product of:
      0.0040547675 = sum of:
        0.0040547675 = product of:
          0.012164302 = sum of:
            0.012164302 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
              0.012164302 = score(doc=1789,freq=2.0), product of:
                0.15720168 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044891298 = queryNorm
                0.07738023 = fieldWeight in 1789, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1789)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    23. 3.2008 19:10:22

Types

  • a 4928
  • m 529
  • el 301
  • s 277
  • i 159
  • b 50
  • r 34
  • x 12
  • p 8
  • n 6
  • d 1
  • h 1
