Search (5672 results, page 284 of 284)

  • language_ss:"e"
  • year_i:[2000 TO 2010}
  1. Rowley, J.E.; Farrow, J.: Organizing knowledge : an introduction to managing access to information (2000) 0.00
    7.4368593E-4 = product of:
      0.0044621155 = sum of:
        0.0044621155 = weight(_text_:in in 2463) [ClassicSimilarity], result of:
          0.0044621155 = score(doc=2463,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.07514416 = fieldWeight in 2463, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2463)
      0.16666667 = coord(1/6)
    
    Footnote
    Review in: BuB 53(2001) no.9, p.596 (J. Plieninger)
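    The score breakdown above follows Lucene's ClassicSimilarity formula (tf = sqrt(freq); weight = queryWeight × fieldWeight; final score = weight × coord). A minimal sketch reproducing the numbers, using the constants exactly as printed in the explain output:

    ```python
    import math

    # Constants copied from the explain output for entry 1 (doc 2463)
    freq       = 2.0          # termFreq of "in" in the field
    idf        = 1.3602545    # idf(docFreq=30841, maxDocs=44218)
    query_norm = 0.043654136  # queryNorm
    field_norm = 0.0390625    # fieldNorm(doc=2463)
    coord      = 1.0 / 6.0    # coord(1/6): one of six query terms matched

    tf = math.sqrt(freq)                  # 1.4142135 = tf(freq=2.0)
    query_weight = idf * query_norm       # 0.059380736 = queryWeight
    field_weight = tf * idf * field_norm  # 0.07514416  = fieldWeight
    weight = query_weight * field_weight  # 0.0044621155 = weight(_text_:in ...)
    score = weight * coord                # 7.4368593E-4 = final score

    # The printed idf itself follows ClassicSimilarity:
    # idf = 1 + ln(maxDocs / (docFreq + 1))
    idf_check = 1.0 + math.log(44218 / (30841 + 1))
    ```

    With these constants, score comes out at 7.4368593E-4 as listed, and idf_check agrees with the printed idf to about four decimal places.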
  2. Dorman, D.: The potential of metasearching as an "open" service (2008) 0.00
    
    Footnote
    Contribution in a special issue "Information organization futures"
  3. Hu, P.J.-H.; Brown, S.A.; Thong, J.Y.L.; Chan, F.K.Y.; Tam, K.Y.: Determinants of service quality and continuance intention of online services : the case of eTax (2009) 0.00
    
    Abstract
    This article examines the determinants of service quality and continuance intention of online services. We proposed and empirically tested a model with both service and technology characteristics as the main drivers of service quality and subsequent continuance intention of eTax, an electronic government (eGovernment) service that enables citizens to file their taxes online. Our data were collected via a two-stage longitudinal online survey of 518 participants before and after they made use of the eTax service in Hong Kong. The results showed that both service characteristics (i.e., security and convenience) and one of the technology characteristics (i.e., perceived usefulness, but not perceived ease of use) were the key determinants of service quality. Another interesting and important finding that runs counter to the vast body of empirical evidence on predicting intention is that perceived usefulness was not the strongest predictor of continuance intention but rather service quality was. To provide a richer picture of these relationships, we also conducted a post-hoc analysis of the effects of service and technology characteristics on the individual dimensions of service quality and their subsequent impact on continuance intention and found assurance and reliability to be the only significant predictors of continuance intention. We present implications for research and practice related to online services.
  4. Carpineto, C.; Mizzaro, S.; Romano, G.; Snidero, M.: Mobile information retrieval with search results clustering : prototypes and evaluations (2009) 0.00
    
    Abstract
    Web searches from mobile devices such as PDAs and cell phones are becoming increasingly popular. However, the traditional list-based search interface paradigm does not scale well to mobile devices due to their inherent limitations. In this article, we investigate the application of search results clustering, used with some success for desktop computer searches, to the mobile scenario. Building on CREDO (Conceptual Reorganization of Documents), a Web clustering engine based on concept lattices, we present its mobile versions Credino and SmartCREDO, for PDAs and cell phones, respectively. Next, we evaluate the retrieval performance of the three prototype systems. We measure the effectiveness of their clustered results compared to a ranked list of results on a subtopic retrieval task, by means of the device-independent notion of subtopic reach time together with a reusable test collection built from Wikipedia ambiguous entries. Then, we make a cross-comparison of methods (i.e., clustering and ranked list) and devices (i.e., desktop, PDA, and cell phone), using an interactive information-finding task performed by external participants. The main finding is that clustering engines are a viable complementary approach to plain search engines for both desktop and mobile searches, especially (but not only) for multitopic informational queries.
  5. Li, R.: The representation of national political freedom on Web interface design : the indicators (2009) 0.00
    
    Abstract
    This study is designed to validate 10 Power Distance indicators identified from previous research on cultural dimensions to establish a measurement for determining a country's national political freedom as represented in Web content and interface design. Two coders performed content analysis on 156 college/university Web sites selected from 39 countries. One-way analysis of variance was applied to each of the proposed 10 indicators to detect statistically significant differences among the means of the three freedom groups (free-country group, partly-free-country group, and not-free-country group). The results indicated that 6 of the 10 proposed indicators could be used to measure a country's national political freedom on Web interface design. The seventh indicator, symmetric layout, demonstrated a negative correlation between the freedom level and the Web representation of Power Distance. The last three proposed indicators failed to show any significant differences among the treatment means, and there were no clear trend patterns for the treatment means of the three freedom groups. By examining national political freedom represented on Web pages, this study not only provides an insight into cultural dimensions and Web interface design but also advances our knowledge in sociological and cultural studies of the Web.
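    The one-way ANOVA this study applies reduces to a simple F statistic: between-group variance over within-group variance. A minimal sketch of the computation (only the formula is standard; the three groups and their indicator scores below are invented for illustration):

    ```python
    # One-way ANOVA F statistic, as used to compare the three freedom
    # groups on each indicator. Hypothetical data, standard formula:
    # F = (SS_between / (k - 1)) / (SS_within / (N - k))

    def one_way_anova_f(groups):
        """Return the F statistic for a list of groups (lists of scores)."""
        n_total = sum(len(g) for g in groups)
        k = len(groups)
        grand_mean = sum(sum(g) for g in groups) / n_total
        # Between-group sum of squares (group means vs. the grand mean)
        ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                         for g in groups)
        # Within-group sum of squares (scores vs. their own group mean)
        ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                        for g in groups)
        return (ss_between / (k - 1)) / (ss_within / (n_total - k))

    # Hypothetical indicator scores for free / partly-free / not-free sites
    free, partly, notfree = [3, 4, 5, 4], [2, 3, 2, 3], [1, 2, 1, 1]
    f_stat = one_way_anova_f([free, partly, notfree])
    ```

    For these made-up scores the groups separate cleanly and F is large; in the study, an F exceeding the critical value at the chosen significance level marks an indicator as discriminating among the freedom groups.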
  6. Kim, P.J.; Lee, J.Y.; Park, J.-H.: Developing a new collection-evaluation method : mapping and the user-side h-index (2009) 0.00
    
    Abstract
    This study proposes a new visualization method and index for collection evaluation. Specifically, it develops a network-based mapping technique and a user-focused Hirsch index (user-side h-index), given the lack of previous studies on collection evaluation methods that have used the h-index. A user-side h-index is developed and compared with previous indices (use factor, difference of percentages, collection-side h-index) that represent the strengths of the subject classes of a library collection. The mapping procedure includes the subject-usage profiling of 63 subject classes and collection-usage map generation through the pathfinder network algorithm. Cluster analyses are then conducted upon the pathfinder network to generate 5 large and 14 small clusters. The nodes represent the strengths of the subject-class usages reflected by the user-side h-index. The user-side h-index was found to have advantages (e.g., better demonstrating the real utility of each subject class) over the other indices. It can also distinguish the strengths of the subject classes more clearly than the collection-side h-index can. These results may help to identify actual usage and strengths of subject classes in library collections through visualized maps. This may be a useful rationale for the establishment of the collection-development plan.
  7. Wang, J.: ¬An extensive study on automated Dewey Decimal Classification (2009) 0.00
    
    Abstract
    In this paper, we present a theoretical analysis and extensive experiments on the automated assignment of Dewey Decimal Classification (DDC) classes to bibliographic data with a supervised machine-learning approach. Library classification systems, such as the DDC, impose great obstacles on state-of-art text categorization (TC) technologies, including deep hierarchy, data sparseness, and skewed distribution. We first analyze statistically the document and category distributions over the DDC, and discuss the obstacles imposed by bibliographic corpora and library classification schemes on TC technology. To overcome these obstacles, we propose an innovative algorithm to reshape the DDC structure into a balanced virtual tree by balancing the category distribution and flattening the hierarchy. To improve the classification effectiveness to a level acceptable to real-world applications, we propose an interactive classification model that is able to predict a class of any depth within a limited number of user interactions. The experiments are conducted on a large bibliographic collection created by the Library of Congress within the science and technology domains over 10 years. With no more than three interactions, a classification accuracy of nearly 90% is achieved, thus providing a practical solution to the automatic bibliographic classification problem.
  8. Shachaf, P.: The paradox of expertise : is the Wikipedia Reference Desk as good as your library? (2009) 0.00
    
    Abstract
    Purpose - The purpose of this paper is to examine the quality of answers on the Wikipedia Reference Desk, and to compare it with library reference services. It aims to examine whether Wikipedia volunteers outperform expert reference librarians and exemplify the paradox of expertise. Design/methodology/approach - The study applied content analysis to a sample of 434 messages (77 questions and 357 responses) from the Wikipedia Reference Desk and focused on three SERVQUAL quality variables: reliability (accuracy, completeness, verifiability), responsiveness, and assurance. Findings - The study reports that on all three SERVQUAL measures quality of answers produced by the Wikipedia Reference Desk is comparable with that of library reference services. Research limitations/implications - The collaborative social reference model matched or outperformed the dyadic reference interview and should be further examined theoretically and empirically. The generalizability of the findings to other similar sites is questionable. Practical implications - Librarians and library science educators should examine the implications of the social reference on the future role of reference services. Originality/value - The study is the first to: examine the quality of the Wikipedia Reference Desk; extend research on Wikipedia quality; use SERVQUAL measures in evaluating Q&A sites; and compare Q&A sites with traditional reference services.
  9. Egghe, L.: Mathematical study of h-index sequences (2009) 0.00
    
    Abstract
    This paper studies mathematical properties of h-index sequences as developed by Liang [Liang, L. (2006). h-Index sequence and h-index matrix: Constructions and applications. Scientometrics, 69(1), 153-159]. For practical reasons, Liang studies such sequences where time goes backwards, while it is more logical to use time going forward (real career periods). Both types of h-index sequences are studied here and their interrelations are revealed. We show cases where these sequences are convex, linear, and concave. We also show that when one of the sequences is convex, the other is concave, showing that the reverse-time sequence, in general, cannot be used to derive similar properties of the (difficult to obtain) forward-time sequence. We show that both sequences are the same if and only if the author produces the same number of papers per year. If the author produces an increasing number of papers per year, then Liang's h-sequences are above the "normal" ones. All these results are also valid for g- and R-sequences. The results are confirmed by the h-, g- and R-sequences (forward and reverse time) of the author.
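    An h-index sequence of the kind Liang and Egghe study can be sketched in a few lines: accumulate a career's papers year by year (forward, or in reverse time) and record the h-index after each year. The citation counts below are hypothetical; the h-index definition itself (the largest h such that h papers have at least h citations each) is standard:

    ```python
    # Sketch of h-index sequences from yearly publication records.

    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        ranked = sorted(citations, reverse=True)
        return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

    def h_sequence(papers_by_year, forward=True):
        """h-index after 1, 2, ... career years.

        forward=True accumulates from the first career year onward;
        forward=False is the reverse-time variant, accumulating
        backwards from the most recent year.
        """
        years = papers_by_year if forward else list(reversed(papers_by_year))
        seq, pool = [], []
        for yearly_citations in years:
            pool.extend(yearly_citations)
            seq.append(h_index(pool))
        return seq

    # Hypothetical career: citation counts of the papers published each year
    career = [[10, 3], [7, 1], [5, 5, 2], [9]]
    forward_seq = h_sequence(career, forward=True)   # [2, 3, 4, 5]
    reverse_seq = h_sequence(career, forward=False)  # [1, 3, 4, 5]
    ```

    Both sequences necessarily end at the same value, the h-index of the whole career, but take different paths to it, which is why properties of the reverse-time sequence cannot simply be transferred to the forward one.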
  10. Spinning the Semantic Web : bringing the World Wide Web to its full potential (2003) 0.00
    
    Abstract
    As the World Wide Web continues to expand, it becomes increasingly difficult for users to obtain information efficiently. Because most search engines read format languages such as HTML or SGML, search results reflect formatting tags more than actual page content, which is expressed in natural language. Spinning the Semantic Web describes an exciting new type of hierarchy and standardization that will replace the current "Web of links" with a "Web of meaning." Using a flexible set of languages and tools, the Semantic Web will make all available information - display elements, metadata, services, images, and especially content - accessible. The result will be an immense repository of information accessible for a wide range of new applications. This first handbook for the Semantic Web covers, among other topics, software agents that can negotiate and collect information, markup languages that can tag many more types of information in a document, and knowledge systems that enable machines to read Web pages and determine their reliability. The truly interdisciplinary Semantic Web combines aspects of artificial intelligence, markup languages, natural language processing, information retrieval, knowledge representation, intelligent agents, and databases.
  11. Blair, D.C.: The challenge of commercial document retrieval : Part I: Major issues, and a framework based on search exhaustivity, determinacy of representation and document collection size (2002) 0.00
    
    Abstract
    With the growing focus on what is collectively known as "knowledge management", a shift continues to take place in commercial information system development: a shift away from the well-understood data retrieval/database model, to the more complex and challenging development of commercial document/information retrieval models. While document retrieval has had a long and rich legacy of research, its impact on commercial applications has been modest. At the enterprise level most large organizations have little understanding of, or commitment to, high quality document access and management. Part of the reason for this is that we still do not have a good framework for understanding the major factors which affect the performance of large-scale corporate document retrieval systems. The thesis of this discussion is that document retrieval - specifically, access to intellectual content - is a complex process which is most strongly influenced by three factors: the size of the document collection; the type of search (exhaustive, existence or sample); and, the determinacy of document representation. Collectively, these factors can be used to provide a useful framework for, or taxonomy of, document retrieval, and highlight some of the fundamental issues facing the design and development of commercial document retrieval systems. This is the first of a series of three articles. Part II (D.C. Blair, The challenge of commercial document retrieval. Part II. A strategy for document searching based on identifiable document partitions, Information Processing and Management, 2001b, this issue) will discuss the implications of this framework for search strategy, and Part III (D.C. Blair, Some thoughts on the reported results of Text REtrieval Conference (TREC), Information Processing and Management, 2002, forthcoming) will consider the importance of the TREC results for our understanding of operating information retrieval systems.
  12. Tredinnick, L.: Why Intranets fail (and how to fix them) : a practical guide for information professionals (2004) 0.00
    
    Abstract
    This book is a practical guide to some of the common problems associated with intranets, and solutions to those problems. The book takes a unique end-user perspective on the role of intranets within organisations. It explores how the needs of the end-user very often conflict with the needs of the organisation, creating a confusion of purpose that impedes the success of the intranet. It sets out clearly why intranets cannot be thought of as merely internal Internets, and why they require their own management strategies and approaches. The book draws on a wide range of examples and analogies from a variety of contexts to set out in a clear and concise way the issues at the heart of failing intranets. It presents step-by-step solutions with universal application. Each issue discussed is accompanied by short practical suggestions for improved intranet design and architecture.

Languages

  • d 22
  • m 3
  • es 2
  • f 1
  • ro 1

Types

  • a 4852
  • m 496
  • el 416
  • s 178
  • b 38
  • i 22
  • r 21
  • x 18
  • p 15
  • n 13
