Search (5 results, page 1 of 1)

  • author_ss:"Nicholson, S."
  1. Nicholson, S.: Raising reliability of Web search tool research through replication and chaos theory (2000) 0.00
    0.003606434 = product of:
      0.014425736 = sum of:
        0.014425736 = weight(_text_:information in 4806) [ClassicSimilarity], result of:
          0.014425736 = score(doc=4806,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23515764 = fieldWeight in 4806, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4806)
      0.25 = coord(1/4)
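    (A short Python sketch reproducing this ClassicSimilarity arithmetic follows the result list.)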
    
    Abstract
    Because the WWW is a dynamic collection of information, the Web search tools (or 'search engines') that index the Web are also dynamic. Traditional information retrieval evaluation techniques may not provide reliable results when applied to Web search tools. This study is the result of 10 replications of the classic 1996 Ding and Marchionini Web search tool research. It explores how replication can transform unreliable results from a single iteration into replicable, and therefore reliable, results over multiple iterations.
    Source
    Journal of the American Society for Information Science. 51(2000) no.8, pp. 724-729
  2. Nicholson, S.; Smith, C.A.: Using lessons from health care to protect the privacy of library users : guidelines for the de-identification of library data based on HIPAA (2007) 0.00
    0.0025239778 = product of:
      0.010095911 = sum of:
        0.010095911 = weight(_text_:information in 451) [ClassicSimilarity], result of:
          0.010095911 = score(doc=451,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.16457605 = fieldWeight in 451, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=451)
      0.25 = coord(1/4)
    
    Abstract
    Although libraries have employed policies to protect data about the use of their services, these policies are rarely specific or standardized. Since 1996, the U.S. health care system has been grappling with the Health Insurance Portability and Accountability Act (HIPAA; Health Insurance Portability and Accountability Act, 1996), which is designed to give those handling personal health information standardized, definitive instructions for protecting data. In this work, the authors briefly discuss the present state of privacy policies covering library use data, outline the HIPAA guidelines to draw parallels between the two domains, and finally propose methods, based on HIPAA, for creating a de-identified library data warehouse that protects user privacy.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.8, pp. 1198-1206
  3. Nicholson, S.: Bibliomining for automated collection development in a digital library setting : using data mining to discover Web-based scholarly research works (2003) 0.00
    0.0021033147 = product of:
      0.008413259 = sum of:
        0.008413259 = weight(_text_:information in 1867) [ClassicSimilarity], result of:
          0.008413259 = score(doc=1867,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.13714671 = fieldWeight in 1867, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1867)
      0.25 = coord(1/4)
    
    Abstract
    This research creates an intelligent agent for automated collection development in a digital library setting. It uses a predictive model based on facets of each Web page to select scholarly works. The criteria came from the academic library selection literature, and a Delphi study was used to refine the list to 41 criteria. A Perl program was designed to analyze a Web page for each criterion and applied to a large collection of scholarly and nonscholarly Web pages. Bibliomining, or data mining for libraries, was then used to create different classification models. Four techniques were used: logistic regression, nonparametric discriminant analysis, classification trees, and neural networks. Accuracy and return were used to judge the effectiveness of each model on test datasets. In addition, a set of problematic pages that were difficult to classify because of their similarity to scholarly research was gathered and classified using the models. The resulting models could be used in the selection process to automatically create a digital library of Web-based scholarly research works. In addition, the technique can be extended to create a digital library of any type of structured electronic information.
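    (A toy Python sketch of this classification step, using invented criteria and data, follows the result list.)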
    Source
    Journal of the American Society for Information Science and Technology. 54(2003) no.12, pp. 1081-1090
  4. Pomerantz, J.; Nicholson, S.; Belanger, Y.; Lankes, R.D.: The current state of digital reference : validation of a general digital reference model through a survey of digital reference services (2004) 0.00
    0.0017847219 = product of:
      0.0071388874 = sum of:
        0.0071388874 = weight(_text_:information in 2562) [ClassicSimilarity], result of:
          0.0071388874 = score(doc=2562,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.116372846 = fieldWeight in 2562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2562)
      0.25 = coord(1/4)
    
    Source
    Information Processing and Management. 40(2004) no.2, pp. 347-363
  5. Nicholson, S.; Sierra, T.; Eseryel, U.Y.; Park, J.-H.; Barkow, P.; Pozo, E.J.; Ward, J.: How much of it is real? : analysis of paid placement in Web search engine results (2006) 0.00
    0.0017847219 = product of:
      0.0071388874 = sum of:
        0.0071388874 = weight(_text_:information in 5278) [ClassicSimilarity], result of:
          0.0071388874 = score(doc=5278,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.116372846 = fieldWeight in 5278, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5278)
      0.25 = coord(1/4)
    
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.4, pp. 448-461
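
  The relevance values above are Lucene ClassicSimilarity "explain" trees. Reading the tree for result 1 bottom-up: tf(freq) = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and the displayed score is coord * queryWeight * fieldWeight. The following minimal Python sketch reproduces that arithmetic for result 1; the function name and argument layout are illustrative, not part of any search-engine API.

    import math

    def classic_similarity_score(freq, doc_freq, max_docs,
                                 query_norm, field_norm, coord):
        """One-term Lucene ClassicSimilarity score, mirroring the explain tree."""
        tf = math.sqrt(freq)                             # tf(freq=6.0) = 2.4494898
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(20772, 44218) = 1.7554779
        query_weight = idf * query_norm                  # queryWeight = 0.06134496
        field_weight = tf * idf * field_norm             # fieldWeight = 0.23515764
        return coord * query_weight * field_weight

    # Values taken from result 1 (term "information" in doc 4806):
    print(classic_similarity_score(freq=6.0, doc_freq=20772, max_docs=44218,
                                   query_norm=0.034944877, field_norm=0.0546875,
                                   coord=0.25))
    # -> 0.003606434..., matching the score shown for result 1

  The same formula explains every other score in the list; only freq, fieldNorm, and the document number change between entries.

  The abstract of result 3 describes reducing each Web page to scores on 41 selection criteria and training classifiers (logistic regression among them) to separate scholarly from nonscholarly pages. Below is a toy sketch of that classification step, assuming scikit-learn is available; the three criteria and all data points are invented for illustration and do not come from the paper.

    from sklearn.linear_model import LogisticRegression

    # Invented binary criteria standing in for the paper's 41:
    # [has_references, has_author_affiliation, has_advertising]
    pages = [
        ([1, 1, 0], 1),  # references + affiliation, no ads -> scholarly
        ([1, 0, 0], 1),
        ([1, 1, 1], 1),
        ([0, 0, 1], 0),  # ad-heavy, no references -> nonscholarly
        ([0, 1, 1], 0),
        ([0, 0, 0], 0),
    ]
    X = [features for features, _ in pages]
    y = [label for _, label in pages]

    # Fit the classifier, then score an unseen page's criteria vector.
    model = LogisticRegression().fit(X, y)
    print(model.predict([[1, 1, 0]]))  # -> [1]: candidate for the collection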