Search (3 results, page 1 of 1)

  • author_ss:"Beghtol, C."
  • year_i:[2000 TO 2010}
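  The two active facets are Lucene/Solr filter expressions: an exact match on the author_ss string field and a year range on year_i that is inclusive at 2000 and exclusive at 2010 (square versus curly bracket). A minimal sketch of how such filters might be passed to a standard Solr select handler follows; the endpoint path, the free-text query placeholder, and the debugQuery flag are assumptions for illustration, not taken from this page.

    # Sketch only: rebuild the two facet filters as Solr fq parameters.
    from urllib.parse import urlencode

    params = [
        ("q", "<query terms>"),             # placeholder; the actual query string is not shown on this page
        ("fq", 'author_ss:"Beghtol, C."'),  # exact author facet
        ("fq", "year_i:[2000 TO 2010}"),    # 2000 inclusive .. 2010 exclusive
        ("debugQuery", "true"),             # asks Solr to return score explanations like those below
    ]
    print("/solr/select?" + urlencode(params))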
  1. Beghtol, C.: The Iter Bibliography : International standard subject access to medieval and renaissance materials (400-1700) (2003) 0.01
    0.0075834226 = product of:
      0.022750268 = sum of:
        0.022750268 = product of:
          0.045500536 = sum of:
            0.045500536 = weight(_text_:indexing in 3965) [ClassicSimilarity], result of:
              0.045500536 = score(doc=3965,freq=4.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.23924173 = fieldWeight in 3965, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3965)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
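    The 0.01 shown for this record is the ClassicSimilarity (tf-idf) explanation above rounded to two decimals: the one matching clause, indexing, contributes queryWeight × fieldWeight, and the two coord() factors scale it down because only one of two inner and one of three outer query clauses matched. A short sketch that recomputes the figures, with the constants copied from the tree above (tf is the square root of the term frequency in Lucene's classic similarity):

      import math

      # Constants copied from the explanation for doc 3965.
      idf = 3.8278677           # idf(docFreq=2614, maxDocs=44218)
      query_norm = 0.049684696
      field_norm = 0.03125
      freq = 4.0                # term frequency of "indexing" in the field

      query_weight = idf * query_norm                     # 0.19018644
      field_weight = math.sqrt(freq) * idf * field_norm   # 0.23924173
      term_score = query_weight * field_weight            # 0.045500536

      # coord(1/2) and coord(1/3) from the tree above.
      final_score = term_score * 0.5 * (1.0 / 3.0)
      print(final_score)        # ~0.0075834, displayed as 0.01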
    
    Content
    "1. Iter: Gateway to the Middle Ages and Renaissance Iter is a non-profit research project dedicated to providing electronic access to all kinds and formats of materials pertaining to the Middle Ages and Renaissance (400-1700). Iter began in 1995 as a joint initiative of the Renaissance Society of America (RSA) in New York City and the Centre for Reformation and Renaissance Studies (CRRS), Univ. of Toronto. By 1997, three more partners had joined: Faculty of Information Studies (FIS), Univ. of Toronto; Arizona Center for Medieval and Renaissance Studies (ACMRS), Arizona State Univ. at Tempe; and John P. Robarts Library, Univ. of Toronto. Iter was funded initially by the five partners and major foundations and, since 1998, has offered low-cost subscriptions to institutions and individuals. When Iter becomes financially self-sufficient, any profits will be used to enhance and expand the project. Iter databases are housed and maintained at the John P. Robarts Library. The interface is a customized version of DRA WebZ. A new interface using DRA Web can be searched now and will replace the DRA WebZ interface shortly. Iter was originally conceived as a comprehensive bibliography of secondary materials that would be an alternative to the existing commercial research tools for its period. These were expensive, generally appeared several years late, had limited subject indexing, were inconsistent in coverage, of uneven quality, and often depended an fragile networks of volunteers for identification of materials. The production of a reasonably priced, web-based, timely research tool was Iter's first priority. In addition, the partners wanted to involve graduate students in the project in order to contribute to the scholarly training and financial support of future scholars of the Middle Ages and Renaissance and to utilize as much automation as possible."
    Source
    Subject retrieval in a networked environment: Proceedings of the IFLA Satellite Meeting held in Dublin, OH, 14-16 August 2001 and sponsored by the IFLA Classification and Indexing Section, the IFLA Information Technology Section and OCLC. Ed.: I.C. McIlwaine
  2. Beghtol, C.: Naïve classification systems and the global information society (2004) 0.01
    0.005609659 = product of:
      0.016828977 = sum of:
        0.016828977 = product of:
          0.033657953 = sum of:
            0.033657953 = weight(_text_:22 in 3483) [ClassicSimilarity], result of:
              0.033657953 = score(doc=3483,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.19345059 = fieldWeight in 3483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3483)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Pages
    pp. 19-22
  3. Beghtol, C.: Response to Hjoerland and Nicolaisen (2004) 0.00
    0.0046920036 = product of:
      0.01407601 = sum of:
        0.01407601 = product of:
          0.02815202 = sum of:
            0.02815202 = weight(_text_:indexing in 3536) [ClassicSimilarity], result of:
              0.02815202 = score(doc=3536,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.14802328 = fieldWeight in 3536, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3536)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    I am writing to correct some of the misconceptions that Hjoerland and Nicolaisen appear to have about my paper in the previous issue of Knowledge Organization. I would like to address aspects of two of these misapprehensions. The first is the faulty interpretation they have given to my use of the term "naïve classification," and the second is the kinds of classification systems that they appear to believe are discussed in my paper as examples of "naïve classifications." First, the term "naïve classification" is directly analogous to the widely-understood and widely-accepted term "naïve indexing." It is not analogous to the terms to which Hjoerland and Nicolaisen compare it (i.e., "naïve physics", "naïve biology"). The term as I have defined it is not pejorative. It does not imply that the scholars who have developed naïve classifications have not given profoundly serious thought to their own scholarly work. My paper distinguishes between classifications for new knowledge developed by scholars in the various disciplines for the purposes of advancing disciplinary knowledge ("naïve classifications") and classifications for previously existing knowledge developed by information professionals for the purposes of creating access points in information retrieval systems ("professional classifications"). This distinction rests primarily on the purpose of the kind of classification system in question and only secondarily on the knowledge base of the scholars who have created it. Hjoerland and Nicolaisen appear to have misunderstood this point, which is made clearly and adequately in the title, in the abstract and throughout the text of my paper.