Search (29 results, page 1 of 2)

  • × theme_ss:"Literaturübersicht"
  • × year_i:[2000 TO 2010}
  1. Zhu, B.; Chen, H.: Information visualization (2004) 0.01
    0.0051370263 = product of:
      0.07191837 = sum of:
        0.07191837 = weight(_text_:mental in 4276) [ClassicSimilarity], result of:
          0.07191837 = score(doc=4276,freq=6.0), product of:
            0.16438161 = queryWeight, product of:
              6.532101 = idf(docFreq=174, maxDocs=44218)
              0.025165197 = queryNorm
            0.4375086 = fieldWeight in 4276, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              6.532101 = idf(docFreq=174, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4276)
      0.071428575 = coord(1/14)
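The explain() tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown. As a minimal sketch, the displayed factors for the term "mental" in doc 4276 can be recombined by hand (all values are copied from the tree; coord(1/14) means one of fourteen query clauses matched):

```python
import math

# Factors copied from the explain() tree for doc 4276, term "mental".
freq = 6.0
idf = 6.532101           # idf(docFreq=174, maxDocs=44218)
query_norm = 0.025165197
field_norm = 0.02734375
coord = 1.0 / 14.0       # one of fourteen query clauses matched

tf = math.sqrt(freq)                    # 2.4494898
query_weight = idf * query_norm         # 0.16438161
field_weight = tf * idf * field_norm    # 0.4375086
score = query_weight * field_weight * coord
print(score)  # close to the displayed 0.0051370263
```

The same recombination applies to every scoring tree on this page; only freq, idf, and fieldNorm change from entry to entry.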
    
    Abstract
    Advanced technology has resulted in the generation of about one million terabytes of information every year. Ninety-nine percent of this is available in digital format (Keim, 2001). More information will be generated in the next three years than was created during all of previous human history (Keim, 2001). Collecting information is no longer a problem, but extracting value from information collections has become progressively more difficult. Various search engines have been developed to make it easier to locate information of interest, but these work well only for a person who has a specific goal and who understands what and how information is stored. This usually is not the case. Visualization was commonly thought of in terms of representing human mental processes (MacEachren, 1991; Miller, 1984). The concept is now associated with the amplification of these mental processes (Card, Mackinlay, & Shneiderman, 1999). Human eyes can process visual cues rapidly, whereas advanced information analysis techniques transform the computer into a powerful means of managing digitized information. Visualization offers a link between these two potent systems, the human eye and the computer (Gershon, Eick, & Card, 1998), helping to identify patterns and to extract insights from large amounts of information. The identification of patterns is important because it may lead to a scientific discovery, an interpretation of clues to solve a crime, the prediction of catastrophic weather, a successful financial investment, or a better understanding of human behavior in a computer-mediated environment. Visualization technology shows considerable promise for increasing the value of large-scale collections of information, as evidenced by several commercial applications of TreeMap (e.g., http://www.smartmoney.com) and Hyperbolic tree (e.g., http://www.inxight.com) to visualize large-scale hierarchical structures.
Although the proliferation of visualization technologies dates from the 1990s, when sophisticated hardware and software made increasingly fast generation of graphical objects possible, the role of visual aids in facilitating the construction of mental images has a long history. Visualization has been used to communicate ideas, to monitor trends implicit in data, and to explore large volumes of data for hypothesis generation. Imagine traveling to a strange place without a map, having to memorize physical and chemical properties of an element without Mendeleyev's periodic table, trying to understand the stock market without statistical diagrams, or browsing a collection of documents without interactive visual aids. A collection of information can lose its value simply because of the effort required for exhaustive exploration. Such frustrations can be overcome by visualization.
  2. Downie, J.S.: Music information retrieval (2002) 0.00
    0.0025225044 = product of:
      0.03531506 = sum of:
        0.03531506 = weight(_text_:representation in 4287) [ClassicSimilarity], result of:
          0.03531506 = score(doc=4287,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.3050057 = fieldWeight in 4287, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.046875 = fieldNorm(doc=4287)
      0.071428575 = coord(1/14)
    
    Abstract
    Imagine a world where you walk up to a computer and sing the song fragment that has been plaguing you since breakfast. The computer accepts your off-key singing, corrects your request, and promptly suggests to you that "Camptown Races" is the cause of your irritation. You confirm the computer's suggestion by listening to one of the many MP3 files it has found. Satisfied, you kindly decline the offer to retrieve all extant versions of the song, including a recently released Italian rap rendition and an orchestral score featuring a bagpipe duet. Does such a system exist today? No. Will it in the future? Yes. Will such a system be easy to produce? Most decidedly not. Myriad difficulties remain to be overcome before the creation, deployment, and evaluation of robust, large-scale, and content-based Music Information Retrieval (MIR) systems become reality. The dizzyingly complex interaction of music's pitch, temporal, harmonic, timbral, editorial, textual, and bibliographic "facets," for example, demonstrates just one of MIR's perplexing problems. The choice of music representation - whether symbol-based, audio-based, or both - further compounds matters, as each choice determines bandwidth, computation, storage, retrieval, and interface requirements and capabilities.
  3. Yang, K.: Information retrieval on the Web (2004) 0.00
    0.0016816697 = product of:
      0.023543375 = sum of:
        0.023543375 = weight(_text_:representation in 4278) [ClassicSimilarity], result of:
          0.023543375 = score(doc=4278,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.20333713 = fieldWeight in 4278, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.03125 = fieldNorm(doc=4278)
      0.071428575 = coord(1/14)
    
    Abstract
    How do we find information on the Web? Although information on the Web is distributed and decentralized, the Web can be viewed as a single, virtual document collection. In that regard, the fundamental questions and approaches of traditional information retrieval (IR) research (e.g., term weighting, query expansion) are likely to be relevant in Web document retrieval. Findings from traditional IR research, however, may not always be applicable in a Web setting. The Web document collection - massive in size and diverse in content, format, purpose, and quality - challenges the validity of previous research findings that are based on relatively small and homogeneous test collections. Moreover, some traditional IR approaches, although applicable in theory, may be impossible or impractical to implement in a Web setting. For instance, the size, distribution, and dynamic nature of Web information make it extremely difficult to construct a complete and up-to-date data representation of the kind required for a model IR system. To further complicate matters, information seeking on the Web is diverse in character and unpredictable in nature. Web searchers come from all walks of life and are motivated by many kinds of information needs. The wide range of experience, knowledge, motivation, and purpose means that searchers can express diverse types of information needs in a wide variety of ways with differing criteria for satisfying those needs. Conventional evaluation measures, such as precision and recall, may no longer be appropriate for Web IR, where a representative test collection is all but impossible to construct. Finding information on the Web creates many new challenges for, and exacerbates some old problems in, IR research. At the same time, the Web is rich in new types of information not present in most IR test collections. Hyperlinks, usage statistics, document markup tags, and collections of topic hierarchies such as Yahoo!
(http://www.yahoo.com) present an opportunity to leverage Web-specific document characteristics in novel ways that go beyond the term-based retrieval framework of traditional IR. Consequently, researchers in Web IR have reexamined the findings from traditional IR research.
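The precision and recall measures the abstract mentions reduce to simple set arithmetic over retrieved and relevant document IDs; a small sketch (the helper name and IDs are illustrative, not from the chapter):

```python
def precision_recall(retrieved, relevant):
    """Classic IR effectiveness measures over sets of document IDs."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# 3 of the 4 retrieved documents are relevant; 3 of the 6 relevant were found.
p, r = precision_recall({1, 2, 3, 4}, {2, 3, 4, 5, 6, 7})
print(p, r)  # 0.75 0.5
```

The Web-IR difficulty the abstract points to is not the arithmetic but the inputs: on the open Web, the full relevant set is unknowable, so recall cannot be computed exactly.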
  4. Blair, D.C.: Information retrieval and the philosophy of language (2002) 0.00
    0.0016816697 = product of:
      0.023543375 = sum of:
        0.023543375 = weight(_text_:representation in 4283) [ClassicSimilarity], result of:
          0.023543375 = score(doc=4283,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.20333713 = fieldWeight in 4283, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.03125 = fieldNorm(doc=4283)
      0.071428575 = coord(1/14)
    
    Abstract
    Information retrieval - the retrieval, primarily, of documents or textual material - is fundamentally a linguistic process. At the very least we must describe what we want and match that description with descriptions of the information that is available to us. Furthermore, when we describe what we want, we must mean something by that description. This is a deceptively simple act, but such linguistic events have been the grist for philosophical analysis since Aristotle. Although there are complexities involved in referring to authors, document types, or other categories of information retrieval context, here I wish to focus on one of the most problematic activities in information retrieval: the description of the intellectual content of information items. And even though I take information retrieval to involve the description and retrieval of written text, what I say here is applicable to any information item whose intellectual content can be described for retrieval - books, documents, images, audio clips, video clips, scientific specimens, engineering schematics, and so forth. For convenience, though, I will refer only to the description and retrieval of documents. The description of intellectual content can go wrong in many obvious ways. We may describe what we want incorrectly; we may describe it correctly but in such general terms that its description is useless for retrieval; or we may describe what we want correctly, but misinterpret the descriptions of available information, and thereby match our description of what we want incorrectly. From a linguistic point of view, we can be misunderstood in the process of retrieval in many ways. Because the philosophy of language deals specifically with how we are understood and mis-understood, it should have some use for understanding the process of description in information retrieval. First, however, let us examine more closely the kinds of misunderstandings that can occur in information retrieval.
We use language in searching for information in two principal ways. We use it to describe what we want and to discriminate what we want from other information that is available to us but that we do not want. Description and discrimination together articulate the goals of the information search process; they also delineate the two principal ways in which language can fail us in this process. Van Rijsbergen (1979) was the first to make this distinction, calling them "representation" and "discrimination."
  5. Fox, E.A.; Urs, S.R.: Digital libraries (2002) 0.00
    0.0016816697 = product of:
      0.023543375 = sum of:
        0.023543375 = weight(_text_:representation in 4299) [ClassicSimilarity], result of:
          0.023543375 = score(doc=4299,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.20333713 = fieldWeight in 4299, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.03125 = fieldNorm(doc=4299)
      0.071428575 = coord(1/14)
    
    Abstract
    The emergence of digital libraries (DLs), at the interface of library and information science with computer and communication technologies, helped to expand significantly the literature in all of these areas during the late 1990s. The pace of development is reflected by the number of special issues of major journals in information science and computer science, and the increasing number of workshops and conferences on digital libraries. For example, starting in 1995, the Communications of the ACM has devoted three special issues to the topic (Fox, Akscyn, Furuta, & Leggett, 1995; Fox & Marchionini, 1998, 2001). The Journal of the American Society for Information Science devoted two issues to digital libraries (H. Chen, 2000; Fox & Lunin, 1993); Information Processing & Management and the Journal of Visual Communication and Image Representation each had one special issue (Chen & Fox, 1996; Marchionini & Fox, 1999). The domain of digital libraries, though still evolving, has matured over the last decade, as demonstrated by coverage through D-Lib (http://www.dlib.org), the International Journal on Digital Libraries (http://link.springer.de/link/service/journals/00799), and two overview works (W. Y. Arms, 2000; Lesk, 1997; both of which have also served as textbooks). Sun Microsystems published a small book to guide those planning a digital library (Noerr, 2000), and IBM has been developing commercial products for digital libraries since 1994 (IBM, 2000). A number of Web sites have extensive sets of pointers to information on DLs (D-Lib Forum, 2001; Fox, 1998a; Habing, 1998; Hein, 2000; Schwartz, 2001a, 2001b). Further, the field has attracted the attention of diverse academics, research groups, and practitioners, many of whom have attended tutorials, workshops, or conferences, e.g., the Joint Conference on Digital Libraries, which is a sequel to a separate series run by ACM and IEEE-CS.
Therefore, it is timely that ARIST publishes this first review focusing specifically on digital libraries. There has been no ARIST chapter to date directly dealing with the area of DLs, though some related domains have been covered - particularly: information retrieval, user interfaces (Marchionini & Komlodi, 1998), social informatics of DLs (Bishop & Star, 1996), and scholarly communication (see Borgman and Furner's chapter in this volume). This chapter provides an overview of the diverse aspects and dimensions of DL research, practice, and literature, identifying trends and delineating research directions.
  6. Legg, C.: Ontologies on the Semantic Web (2007) 0.00
    0.0016816697 = product of:
      0.023543375 = sum of:
        0.023543375 = weight(_text_:representation in 1979) [ClassicSimilarity], result of:
          0.023543375 = score(doc=1979,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.20333713 = fieldWeight in 1979, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.03125 = fieldNorm(doc=1979)
      0.071428575 = coord(1/14)
    
    Abstract
    As an informational technology, the World Wide Web has enjoyed spectacular success. In just ten years it has transformed the way information is produced, stored, and shared in arenas as diverse as shopping, family photo albums, and high-level academic research. The "Semantic Web" is touted by its developers as equally revolutionary, although it has not yet achieved anything like the Web's exponential uptake. It seeks to transcend a current limitation of the Web - that it largely requires indexing to be accomplished merely on specific character strings. Thus, a person searching for information about "turkey" (the bird) receives from current search engines many irrelevant pages about "Turkey" (the country) and nothing about the Spanish "pavo" even if he or she is a Spanish-speaker able to understand such pages. The Semantic Web vision is to develop technology to facilitate retrieval of information via meanings, not just spellings. For this to be possible, most commentators believe, Semantic Web applications will have to draw on some kind of shared, structured, machine-readable conceptual scheme. Thus, there has been a convergence between the Semantic Web research community and an older tradition with roots in classical Artificial Intelligence (AI) research (sometimes referred to as "knowledge representation") whose goal is to develop a formal ontology. A formal ontology is a machine-readable theory of the most fundamental concepts or "categories" required in order to understand information pertaining to any knowledge domain. A review of the attempts that have been made to realize this goal provides an opportunity to reflect in interestingly concrete ways on various research questions such as the following: - How explicit a machine-understandable theory of meaning is it possible or practical to construct? - How universal a machine-understandable theory of meaning is it possible or practical to construct? 
- How much (and what kind of) inference support is required to realize a machine-understandable theory of meaning? - What is it for a theory of meaning to be machine-understandable anyway?
  7. Martin, B.: Knowledge management (2008) 0.00
    0.0013106616 = product of:
      0.01834926 = sum of:
        0.01834926 = product of:
          0.055047777 = sum of:
            0.055047777 = weight(_text_:29 in 4230) [ClassicSimilarity], result of:
              0.055047777 = score(doc=4230,freq=2.0), product of:
                0.08852329 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.025165197 = queryNorm
                0.6218451 = fieldWeight in 4230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.125 = fieldNorm(doc=4230)
          0.33333334 = coord(1/3)
      0.071428575 = coord(1/14)
    
    Date
    13. 7.2008 9:29:38
  8. Priss, U.: Formal concept analysis in information science (2006) 0.00
    0.0013106616 = product of:
      0.01834926 = sum of:
        0.01834926 = product of:
          0.055047777 = sum of:
            0.055047777 = weight(_text_:29 in 4305) [ClassicSimilarity], result of:
              0.055047777 = score(doc=4305,freq=2.0), product of:
                0.08852329 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.025165197 = queryNorm
                0.6218451 = fieldWeight in 4305, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.125 = fieldNorm(doc=4305)
          0.33333334 = coord(1/3)
      0.071428575 = coord(1/14)
    
    Date
    13. 7.2008 19:29:59
  9. Enser, P.G.B.: Visual image retrieval (2008) 0.00
    0.0012988712 = product of:
      0.018184196 = sum of:
        0.018184196 = product of:
          0.05455259 = sum of:
            0.05455259 = weight(_text_:22 in 3281) [ClassicSimilarity], result of:
              0.05455259 = score(doc=3281,freq=2.0), product of:
                0.08812423 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.025165197 = queryNorm
                0.61904186 = fieldWeight in 3281, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=3281)
          0.33333334 = coord(1/3)
      0.071428575 = coord(1/14)
    
    Date
    22. 1.2012 13:01:26
  10. Morris, S.A.: Mapping research specialties (2008) 0.00
    0.0012988712 = product of:
      0.018184196 = sum of:
        0.018184196 = product of:
          0.05455259 = sum of:
            0.05455259 = weight(_text_:22 in 3962) [ClassicSimilarity], result of:
              0.05455259 = score(doc=3962,freq=2.0), product of:
                0.08812423 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.025165197 = queryNorm
                0.61904186 = fieldWeight in 3962, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=3962)
          0.33333334 = coord(1/3)
      0.071428575 = coord(1/14)
    
    Date
    13. 7.2008 9:30:22
  11. Fallis, D.: Social epistemology and information science (2006) 0.00
    0.0012988712 = product of:
      0.018184196 = sum of:
        0.018184196 = product of:
          0.05455259 = sum of:
            0.05455259 = weight(_text_:22 in 4368) [ClassicSimilarity], result of:
              0.05455259 = score(doc=4368,freq=2.0), product of:
                0.08812423 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.025165197 = queryNorm
                0.61904186 = fieldWeight in 4368, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=4368)
          0.33333334 = coord(1/3)
      0.071428575 = coord(1/14)
    
    Date
    13. 7.2008 19:22:28
  12. Nicolaisen, J.: Citation analysis (2007) 0.00
    0.0012988712 = product of:
      0.018184196 = sum of:
        0.018184196 = product of:
          0.05455259 = sum of:
            0.05455259 = weight(_text_:22 in 6091) [ClassicSimilarity], result of:
              0.05455259 = score(doc=6091,freq=2.0), product of:
                0.08812423 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.025165197 = queryNorm
                0.61904186 = fieldWeight in 6091, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=6091)
          0.33333334 = coord(1/3)
      0.071428575 = coord(1/14)
    
    Date
    13. 7.2008 19:53:22
  13. Khoo, S.G.; Na, J.-C.: Semantic relations in information science (2006) 0.00
    0.0012612522 = product of:
      0.01765753 = sum of:
        0.01765753 = weight(_text_:representation in 1978) [ClassicSimilarity], result of:
          0.01765753 = score(doc=1978,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.15250285 = fieldWeight in 1978, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1978)
      0.071428575 = coord(1/14)
    
    Abstract
    Linguists in the structuralist tradition (e.g., Lyons, 1977; Saussure, 1959) have asserted that concepts cannot be defined on their own but only in relation to other concepts. Semantic relations appear to reflect a logical structure in the fundamental nature of thought (Caplan & Herrmann, 1993). Green, Bean, and Myaeng (2002) noted that semantic relations play a critical role in how we represent knowledge psychologically, linguistically, and computationally, and that many systems of knowledge representation start with a basic distinction between entities and relations. Green (2001, p. 3) said that "relationships are involved as we combine simple entities to form more complex entities, as we compare entities, as we group entities, as one entity performs a process on another entity, and so forth. Indeed, many things that we might initially regard as basic and elemental are revealed upon further examination to involve internal structure, or in other words, internal relationships." Concepts and relations are often expressed in language and text. Language is used not just for communicating concepts and relations, but also for representing, storing, and reasoning with concepts and relations. We shall examine the nature of semantic relations from a linguistic and psychological perspective, with an emphasis on relations expressed in text. The usefulness of semantic relations in information science, especially in ontology construction, information extraction, information retrieval, question-answering, and text summarization is discussed. Research and development in information science have focused on concepts and terms, but the focus will increasingly shift to the identification, processing, and management of relations to achieve greater effectiveness and refinement in information science techniques.
Previous chapters in ARIST on natural language processing (Chowdhury, 2003), text mining (Trybula, 1999), information retrieval and the philosophy of language (Blair, 2003), and query expansion (Efthimiadis, 1996) provide a background for this discussion, as semantic relations are an important part of these applications.
  14. Dumais, S.T.: Latent semantic analysis (2003) 0.00
    0.0012612522 = product of:
      0.01765753 = sum of:
        0.01765753 = weight(_text_:representation in 2462) [ClassicSimilarity], result of:
          0.01765753 = score(doc=2462,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.15250285 = fieldWeight in 2462, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2462)
      0.071428575 = coord(1/14)
    
    Abstract
    With the advent of large-scale collections of full text, statistical approaches are being used more and more to analyze the relationships among terms and documents. LSA takes this approach. LSA induces knowledge about the meanings of documents and words by analyzing large collections of texts. The approach simultaneously models the relationships among documents based on their constituent words, and the relationships between words based on their occurrence in documents. By using fewer dimensions for representation than there are unique words, LSA induces similarities among terms that are useful in solving the information retrieval problems described earlier. LSA is a fully automatic statistical approach to extracting relations among words by means of their contexts of use in documents, passages, or sentences. It makes no use of natural language processing techniques for analyzing morphological, syntactic, or semantic relations. Nor does it use humanly constructed resources like dictionaries, thesauri, lexical reference systems (e.g., WordNet), semantic networks, or other knowledge representations. Its only input is large amounts of texts. LSA is an unsupervised learning technique. It starts with a large collection of texts, builds a term-document matrix, and tries to uncover some similarity structures that are useful for information retrieval and related text-analysis problems. Several recent ARIST chapters have focused on text mining and discovery (Benoit, 2002; Solomon, 2002; Trybula, 2000). These chapters provide complementary coverage of the field of text analysis.
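The pipeline the abstract describes - a term-document matrix reduced by truncated SVD to fewer dimensions than there are unique words - can be sketched as follows (the toy matrix and term labels are invented for illustration):

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = documents.
# LSA's only input is text; these counts stand in for a real corpus.
A = np.array([
    [2, 0, 1, 0],   # "retrieval"
    [1, 0, 2, 0],   # "index"
    [0, 3, 0, 1],   # "music"
    [0, 1, 0, 2],   # "pitch"
], dtype=float)

# Truncated SVD: keep k dimensions, with k < the number of unique terms.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # documents in the k-dim latent space

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Documents 0 and 2 share "retrieval"/"index" vocabulary, so they land close
# together in the latent space; documents 0 and 1 share no terms at all.
print(cos(doc_vecs[0], doc_vecs[2]) > cos(doc_vecs[0], doc_vecs[1]))
```

Term similarities fall out the same way from the rows of U scaled by s, which is how LSA surfaces related words that never co-occur in a single document.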
  15. Denton, W.: Putting facets on the Web : an annotated bibliography (2003) 0.00
    0.0010510436 = product of:
      0.01471461 = sum of:
        0.01471461 = weight(_text_:representation in 2467) [ClassicSimilarity], result of:
          0.01471461 = score(doc=2467,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.12708572 = fieldWeight in 2467, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2467)
      0.071428575 = coord(1/14)
    
    Abstract
    This bibliography is not meant to be exhaustive, but unfortunately it is not as complete as I wanted. Some books and articles are not included, but they may be used in my future work. (These include two books and one article by B.C. Vickery: Faceted Classification Schemes (New Brunswick, NJ: Rutgers, 1966), Classification and Indexing in Science, 3rd ed. (London: Butterworths, 1975), and "Knowledge Representation: A Brief Review" (Journal of Documentation 42 no. 3 (September 1986): 145-159; and A.C. Foskett's "The Future of Faceted Classification" in The Future of Classification, edited by Rita Marcella and Arthur Maltby (Aldershot, England: Gower, 2000): 69-80). Nevertheless, I hope this bibliography will be useful for those both new to and familiar with faceted hypertext systems. Some very basic resources are listed, as well as some very advanced ones. Some example web sites are mentioned, but there is no detailed technical discussion of any software. The user interface to any web site is extremely important, and this is briefly mentioned in two or three places (for example the discussion of lawforwa.org (see Example Web Sites)). The larger question of how to display information graphically and with hypertext is outside the scope of this bibliography. There are five sections: Recommended, Background, Not Relevant, Example Web Sites, and Mailing Lists. Background material is either introductory, advanced, or of peripheral interest, and can be read after the Recommended resources if the reader wants to know more. The Not Relevant category contains articles that may appear in bibliographies but are not relevant for my purposes.
  16. Bath, P.A.: Data mining in health and medical information (2003) 0.00
    6.553308E-4 = product of:
      0.00917463 = sum of:
        0.00917463 = product of:
          0.027523888 = sum of:
            0.027523888 = weight(_text_:29 in 4263) [ClassicSimilarity], result of:
              0.027523888 = score(doc=4263,freq=2.0), product of:
                0.08852329 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.025165197 = queryNorm
                0.31092256 = fieldWeight in 4263, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4263)
          0.33333334 = coord(1/3)
      0.071428575 = coord(1/14)
    
    Date
    23.10.2005 18:29:03
  17. Kim, K.-S.: Recent work in cataloging and classification, 2000-2002 (2003) 0.00
    6.494356E-4 = product of:
      0.009092098 = sum of:
        0.009092098 = product of:
          0.027276294 = sum of:
            0.027276294 = weight(_text_:22 in 152) [ClassicSimilarity], result of:
              0.027276294 = score(doc=152,freq=2.0), product of:
                0.08812423 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.025165197 = queryNorm
                0.30952093 = fieldWeight in 152, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=152)
          0.33333334 = coord(1/3)
      0.071428575 = coord(1/14)
    
    Date
    10. 9.2000 17:38:22
  18. El-Sherbini, M.A.: Cataloging and classification : review of the literature 2005-06 (2008) 0.00
    6.494356E-4 = product of:
      0.009092098 = sum of:
        0.009092098 = product of:
          0.027276294 = sum of:
            0.027276294 = weight(_text_:22 in 249) [ClassicSimilarity], result of:
              0.027276294 = score(doc=249,freq=2.0), product of:
                0.08812423 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.025165197 = queryNorm
                0.30952093 = fieldWeight in 249, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=249)
          0.33333334 = coord(1/3)
      0.071428575 = coord(1/14)
    
    Date
    10. 9.2000 17:38:22
  19. Miksa, S.D.: ¬The challenges of change : a review of cataloging and classification literature, 2003-2004 (2007) 0.00
    6.494356E-4 = product of:
      0.009092098 = sum of:
        0.009092098 = product of:
          0.027276294 = sum of:
            0.027276294 = weight(_text_:22 in 266) [ClassicSimilarity], result of:
              0.027276294 = score(doc=266,freq=2.0), product of:
                0.08812423 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.025165197 = queryNorm
                0.30952093 = fieldWeight in 266, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=266)
          0.33333334 = coord(1/3)
      0.071428575 = coord(1/14)
    
    Date
    10. 9.2000 17:38:22
  20. Nielsen, M.L.: Thesaurus construction : key issues and selected readings (2004) 0.00
    5.6825613E-4 = product of:
      0.007955586 = sum of:
        0.007955586 = product of:
          0.023866756 = sum of:
            0.023866756 = weight(_text_:22 in 5006) [ClassicSimilarity], result of:
              0.023866756 = score(doc=5006,freq=2.0), product of:
                0.08812423 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.025165197 = queryNorm
                0.2708308 = fieldWeight in 5006, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5006)
          0.33333334 = coord(1/3)
      0.071428575 = coord(1/14)
    
    Date
    18. 5.2006 20:06:22