Search (96 results, page 1 of 5)

  • language_ss:"e"
  • type_ss:"el"
  • year_i:[1990 TO 2000}
  1. Chen, H.: Semantic research for digital libraries (1999) 0.08
    0.07562129 = product of:
      0.15124258 = sum of:
        0.015624823 = weight(_text_:for in 1247) [ClassicSimilarity], result of:
          0.015624823 = score(doc=1247,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.17601961 = fieldWeight in 1247, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=1247)
        0.13561776 = weight(_text_:computing in 1247) [ClassicSimilarity], result of:
          0.13561776 = score(doc=1247,freq=4.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.51857525 = fieldWeight in 1247, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.046875 = fieldNorm(doc=1247)
      0.5 = coord(2/4)
    
    Abstract
    In this era of the Internet and distributed, multimedia computing, new and emerging classes of information systems applications have swept into the lives of office workers and people in general. From digital libraries, multimedia systems, geographic information systems, and collaborative computing to electronic commerce, virtual reality, and electronic video arts and games, these applications have created tremendous opportunities for information and computer science researchers and practitioners. As applications become more pervasive, pressing, and diverse, several well-known information retrieval (IR) problems have become even more urgent. Information overload, a result of the ease of information creation and transmission via the Internet and WWW, has become more troublesome (e.g., even stockbrokers and elementary school students, heavily exposed to various WWW search engines, are versed in such IR terminology as recall and precision). Significant variations in database formats and structures, the richness of information media (text, audio, and video), and an abundance of multilingual information content also have created severe information interoperability problems -- structural interoperability, media interoperability, and multilingual interoperability.
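
The explain tree under result 1 (and the similar trees in the results that follow) is Lucene's ClassicSimilarity scoring breakdown. As a hedged illustration, the short Python sketch below recomputes the 0.07562129 for doc 1247 from the factors listed in the tree. The constants are copied from the output; the only relationships assumed are the standard ones ClassicSimilarity reports: tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, clause score = queryWeight * fieldWeight, final score = coord times the sum of matching clause scores.

import math

# Sketch only (not Lucene itself): recompute the explain tree for doc 1247.
QUERY_NORM = 0.047278564    # queryNorm from the explain output
FIELD_NORM = 0.046875       # fieldNorm(doc=1247)

def clause_score(freq, idf):
    tf = math.sqrt(freq)                  # 2.0 for freq=4.0
    query_weight = idf * QUERY_NORM       # e.g. 0.26151994 for 'computing'
    field_weight = tf * idf * FIELD_NORM  # e.g. 0.51857525 for 'computing'
    return query_weight * field_weight

score_for       = clause_score(freq=4.0, idf=1.8775425)  # ~0.015624823
score_computing = clause_score(freq=4.0, idf=5.5314693)  # ~0.13561776

coord = 2 / 4   # coord(2/4): two of the four query clauses matched this doc
print(round(coord * (score_for + score_computing), 8))
# ~0.07562129 (small float-precision differences aside)
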
  2. Dunning, T.: Statistical identification of language (1994) 0.05
    0.053472333 = product of:
      0.106944665 = sum of:
        0.0110484185 = weight(_text_:for in 3627) [ClassicSimilarity], result of:
          0.0110484185 = score(doc=3627,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.12446466 = fieldWeight in 3627, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=3627)
        0.095896244 = weight(_text_:computing in 3627) [ClassicSimilarity], result of:
          0.095896244 = score(doc=3627,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.36668807 = fieldWeight in 3627, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.046875 = fieldNorm(doc=3627)
      0.5 = coord(2/4)
    
    Abstract
    A statistically based program has been written which learns to distinguish between languages. The amount of training text that such a program needs is surprisingly small, and the amount of text needed to make an identification is also quite small. The program incorporates no linguistic presuppositions other than the assumption that text can be encoded as a string of bytes. Such a program can be used to determine which language small bits of text are in. It also shows a potential for what might be called 'statistical philology' in that it may be applied directly to phonetic transcriptions to help elucidate family trees among language dialects. A variant of this program has been shown to be useful as a quality control in biochemistry. In this application, genetic sequences are assumed to be expressions in a language peculiar to the organism from which the sequence is taken. Thus language identification becomes species identification.
    Series
    Technical report CRL MCCS-94-273, Computing Research Lab, New Mexico State University
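
Dunning's abstract describes the approach only at a high level. Purely as an illustration of the "text as a string of bytes" premise (not Dunning's actual implementation, which is based on log-likelihood statistics), a minimal byte-bigram classifier could look like the sketch below; the training snippets and function names are invented for the example.

import math
from collections import Counter

# Illustrative byte-bigram language guesser; NOT Dunning's implementation.
def train(text):
    """Byte-bigram counts: (pair counts, left-byte totals) for add-one smoothing."""
    pairs = Counter(zip(text, text[1:]))
    totals = Counter(text[:-1])
    return pairs, totals

def logprob(model, snippet):
    pairs, totals = model
    return sum(
        math.log((pairs[(a, b)] + 1) / (totals[a] + 256))
        for a, b in zip(snippet, snippet[1:])
    )

def identify(models, snippet):
    """Pick the language whose model gives the snippet the highest log-probability."""
    return max(models, key=lambda lang: logprob(models[lang], snippet))

# Invented training snippets, purely for illustration.
models = {
    "en": train(b"the quick brown fox jumps over the lazy dog " * 50),
    "de": train(b"der schnelle braune fuchs springt ueber den faulen hund " * 50),
}
print(identify(models, b"the dog sleeps here"))   # expected: 'en'
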
  3. Page, A.: ¬The search is over : the search-engines secrets of the pros (1996) 0.04
    0.039956767 = product of:
      0.15982707 = sum of:
        0.15982707 = weight(_text_:computing in 5670) [ClassicSimilarity], result of:
          0.15982707 = score(doc=5670,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.6111468 = fieldWeight in 5670, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.078125 = fieldNorm(doc=5670)
      0.25 = coord(1/4)
    
    Source
    PC computing. 1996, Oct., S. -
  4. Arms, W.Y.; Blanchi, C.; Overly, E.A.: ¬An architecture for information in digital libraries (1997) 0.04
    0.03913265 = product of:
      0.0782653 = sum of:
        0.022325827 = weight(_text_:for in 1260) [ClassicSimilarity], result of:
          0.022325827 = score(doc=1260,freq=24.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.25150898 = fieldWeight in 1260, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1260)
        0.055939477 = weight(_text_:computing in 1260) [ClassicSimilarity], result of:
          0.055939477 = score(doc=1260,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.21390139 = fieldWeight in 1260, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1260)
      0.5 = coord(2/4)
    
    Abstract
     Flexible organization of information is one of the key design challenges in any digital library. For the past year, we have been working with members of the National Digital Library Project (NDLP) at the Library of Congress to build an experimental system to organize and store library collections. This is a report on the work. In particular, we describe how a few technical building blocks are used to organize the material in collections, such as the NDLP's, and how these methods fit into a general distributed computing framework.

     The technical building blocks are part of a framework that evolved as part of the Computer Science Technical Reports Project (CSTR). This framework is described in the paper, "A Framework for Distributed Digital Object Services", by Robert Kahn and Robert Wilensky (1995). The main building blocks are: "digital objects", which are used to manage digital material in a networked environment; "handles", which identify digital objects and other network resources; and "repositories", in which digital objects are stored. These concepts are amplified in "Key Concepts in the Architecture of the Digital Library", by William Y. Arms (1995).

     In summer 1995, after earlier experimental development, work began on the implementation of a full digital library system based on this framework. In addition to Kahn/Wilensky and Arms, several working papers further elaborate on the design concepts. A paper by Carl Lagoze and David Ely, "Implementation Issues in an Open Architectural Framework for Digital Object Services", delves into some of the repository concepts. The initial repository implementation was based on a paper by Carl Lagoze, Robert McGrath, Ed Overly and Nancy Yeager, "A Design for Inter-Operable Secure Object Stores (ISOS)". Work on the handle system, which began in 1992, is described in a series of papers that can be found on the Handle Home Page.

     The National Digital Library Program (NDLP) at the Library of Congress is a large scale project to convert historic collections to digital form and make them widely available over the Internet. The program is described in two articles by Caroline R. Arms, "Historical Collections for the National Digital Library". The NDLP itself draws on experience gained through the earlier American Memory Program.

     Based on this work, we have built a pilot system that demonstrates how digital objects can be used to organize complex materials, such as those found in the NDLP. The pilot was demonstrated to members of the library in July 1996. The pilot system includes the handle system for identifying digital objects, a pilot repository to store them, and two user interfaces: one designed for librarians to manage digital objects in the repository, the other for library patrons to access the materials stored in the repository. Materials from the NDLP's Coolidge Consumerism compilation have been deposited into the pilot repository. They include a variety of photographs and texts, converted to digital form. The pilot demonstrates the use of handles for identifying such material, the use of meta-objects for managing sets of digital objects, and the choice of metadata. We are now implementing an enhanced prototype system for completion in early 1997.
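
As a rough illustration of the three building blocks named in this abstract, the sketch below models digital objects, handles, and a repository as plain data structures. The field names, handle strings, and in-memory handle resolution are assumptions made for the example, not the NDLP or CNRI implementation.

from dataclasses import dataclass

# Hypothetical sketch; names and resolution mechanism are invented for illustration.
@dataclass
class DigitalObject:
    handle: str        # persistent identifier for the object
    metadata: dict     # descriptive metadata chosen for the collection
    content: bytes     # the digitised material itself

class Repository:
    """Stores digital objects and resolves their handles."""

    def __init__(self):
        self._store = {}

    def deposit(self, obj: DigitalObject) -> None:
        self._store[obj.handle] = obj

    def resolve(self, handle: str) -> DigitalObject:
        return self._store[handle]

# A "meta-object" grouping related materials can simply be a digital object
# whose content is a list of other handles.
repo = Repository()
repo.deposit(DigitalObject("example/coolidge-0001", {"type": "photograph"}, b"..."))
print(repo.resolve("example/coolidge-0001").metadata["type"])   # photograph
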
  5. Information retrieval research : Proceedings of the 19th Annual BCS-IRSG Colloquium on IR Research, Aberdeen, Scotland, 8-9 April 1997 (1997) 0.03
    0.031965416 = product of:
      0.12786166 = sum of:
        0.12786166 = weight(_text_:computing in 5393) [ClassicSimilarity], result of:
          0.12786166 = score(doc=5393,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.48891744 = fieldWeight in 5393, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0625 = fieldNorm(doc=5393)
      0.25 = coord(1/4)
    
    Series
    Electronic workshops in computing
  6. Electronic Dewey (1993) 0.02
    0.020176798 = product of:
      0.040353596 = sum of:
        0.014731225 = weight(_text_:for in 1088) [ClassicSimilarity], result of:
          0.014731225 = score(doc=1088,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.16595288 = fieldWeight in 1088, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0625 = fieldNorm(doc=1088)
        0.025622372 = product of:
          0.051244743 = sum of:
            0.051244743 = weight(_text_:22 in 1088) [ClassicSimilarity], result of:
              0.051244743 = score(doc=1088,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.30952093 = fieldWeight in 1088, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1088)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Footnote
     Review in: Cataloging and classification quarterly 19(1994) no.1, pp.134-137 (M. Carpenter). - A Windows version has since become available: 'Electronic Dewey for Windows'; cf. Knowledge organization 22(1995) no.1, p.17
  7. Priss, U.: Faceted knowledge representation (1999) 0.02
    0.017654698 = product of:
      0.035309397 = sum of:
        0.012889821 = weight(_text_:for in 2654) [ClassicSimilarity], result of:
          0.012889821 = score(doc=2654,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.14520876 = fieldWeight in 2654, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2654)
        0.022419576 = product of:
          0.04483915 = sum of:
            0.04483915 = weight(_text_:22 in 2654) [ClassicSimilarity], result of:
              0.04483915 = score(doc=2654,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.2708308 = fieldWeight in 2654, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2654)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Faceted Knowledge Representation provides a formalism for implementing knowledge systems. The basic notions of faceted knowledge representation are "unit", "relation", "facet" and "interpretation". Units are atomic elements and can be abstract elements or refer to external objects in an application. Relations are sequences or matrices of 0 and 1's (binary matrices). Facets are relational structures that combine units and relations. Each facet represents an aspect or viewpoint of a knowledge system. Interpretations are mappings that can be used to translate between different representations. This paper introduces the basic notions of faceted knowledge representation. The formalism is applied here to an abstract modeling of a faceted thesaurus as used in information retrieval.
    Date
    22. 1.2016 17:30:31
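
To make the four notions in this abstract concrete, the sketch below encodes units, a binary relation matrix, a facet, and an interpretation as plain Python structures. The representation choices (lists of strings, nested 0/1 lists, a dict-based mapping) are illustrative assumptions, not Priss's formal definitions.

from dataclasses import dataclass

# Illustrative encoding of "unit", "relation", "facet", "interpretation".
@dataclass
class Facet:
    """One aspect/viewpoint: units plus named binary relations over them."""
    units: list            # atomic elements
    relations: dict        # relation name -> 0/1 matrix over the units

    def related(self, relation, a, b):
        i, j = self.units.index(a), self.units.index(b)
        return self.relations[relation][i][j] == 1

def interpret(mapping, unit):
    """An interpretation: a mapping that translates units between representations."""
    return mapping[unit]

# Tiny faceted-thesaurus example with a 'broader term' relation.
topics = Facet(
    units=["cats", "mammals", "animals"],
    relations={"broader": [[0, 1, 0],    # cats    -> mammals
                           [0, 0, 1],    # mammals -> animals
                           [0, 0, 0]]},
)
print(topics.related("broader", "cats", "mammals"))   # True
print(interpret({"cats": "Felidae"}, "cats"))         # Felidae
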
  8. Priss, U.: Description logic and faceted knowledge representation (1999) 0.02
    0.0174208 = product of:
      0.0348416 = sum of:
        0.015624823 = weight(_text_:for in 2655) [ClassicSimilarity], result of:
          0.015624823 = score(doc=2655,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.17601961 = fieldWeight in 2655, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=2655)
        0.019216778 = product of:
          0.038433556 = sum of:
            0.038433556 = weight(_text_:22 in 2655) [ClassicSimilarity], result of:
              0.038433556 = score(doc=2655,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.23214069 = fieldWeight in 2655, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2655)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The term "facet" was introduced into the field of library classification systems by Ranganathan in the 1930's [Ranganathan, 1962]. A facet is a viewpoint or aspect. In contrast to traditional classification systems, faceted systems are modular in that a domain is analyzed in terms of baseline facets which are then synthesized. In this paper, the term "facet" is used in a broader meaning. Facets can describe different aspects on the same level of abstraction or the same aspect on different levels of abstraction. The notion of facets is related to database views, multicontexts and conceptual scaling in formal concept analysis [Ganter and Wille, 1999], polymorphism in object-oriented design, aspect-oriented programming, views and contexts in description logic and semantic networks. This paper presents a definition of facets in terms of faceted knowledge representation that incorporates the traditional narrower notion of facets and potentially facilitates translation between different knowledge representation formalisms. A goal of this approach is a modular, machine-aided knowledge base design mechanism. A possible application is faceted thesaurus construction for information retrieval and data mining. Reasoning complexity depends on the size of the modules (facets). A more general analysis of complexity will be left for future research.
    Date
    22. 1.2016 17:30:31
  9. Dunning, A.: Do we still need search engines? (1999) 0.01
    0.011209788 = product of:
      0.04483915 = sum of:
        0.04483915 = product of:
          0.0896783 = sum of:
            0.0896783 = weight(_text_:22 in 6021) [ClassicSimilarity], result of:
              0.0896783 = score(doc=6021,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.5416616 = fieldWeight in 6021, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6021)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Ariadne. 1999, no.22
  10. Strobel, S.: ¬The complete Linux kit : fully configured LINUX system kernel (1997) 0.01
    0.009608389 = product of:
      0.038433556 = sum of:
        0.038433556 = product of:
          0.07686711 = sum of:
            0.07686711 = weight(_text_:22 in 8959) [ClassicSimilarity], result of:
              0.07686711 = score(doc=8959,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.46428138 = fieldWeight in 8959, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=8959)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    16. 7.2002 20:22:55
  11. Birmingham, J.: Internet search engines (1996) 0.01
    0.009608389 = product of:
      0.038433556 = sum of:
        0.038433556 = product of:
          0.07686711 = sum of:
            0.07686711 = weight(_text_:22 in 5664) [ClassicSimilarity], result of:
              0.07686711 = score(doc=5664,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.46428138 = fieldWeight in 5664, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5664)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    10.11.1996 16:36:22
  12. Delsey, T.: ¬The logical structure of the Anglo-American cataloguing rules : Drafted for the Joint Steering Committee for Revision of AACR by Tom Delsey ... (1998) 0.01
    0.00797351 = product of:
      0.03189404 = sum of:
        0.03189404 = weight(_text_:for in 3005) [ClassicSimilarity], result of:
          0.03189404 = score(doc=3005,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.35929856 = fieldWeight in 3005, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.078125 = fieldNorm(doc=3005)
      0.25 = coord(1/4)
    
    Imprint
    Ottawa : Joint Steering Committee for Revision of AACR
  13. Dewey for Windows : LC subject authorities issues May 1998 (1998) 0.01
    0.0078124115 = product of:
      0.031249646 = sum of:
        0.031249646 = weight(_text_:for in 423) [ClassicSimilarity], result of:
          0.031249646 = score(doc=423,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.35203922 = fieldWeight in 423, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.09375 = fieldNorm(doc=423)
      0.25 = coord(1/4)
    
    Object
    Dewey for Windows
  14. Woods, E.W.; IFLA Section on Classification and Indexing and Section on Information Technology; Joint Working Group on a Classification Format: Requirements for a format of classification data : Final report, July 1996 (1996) 0.01
    0.0078124115 = product of:
      0.031249646 = sum of:
        0.031249646 = weight(_text_:for in 3008) [ClassicSimilarity], result of:
          0.031249646 = score(doc=3008,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.35203922 = fieldWeight in 3008, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.09375 = fieldNorm(doc=3008)
      0.25 = coord(1/4)
    
    Object
    USMARC for classification data
  15. Weibel, S.: ¬A proposed convention for embedding metadata in HTML <June 2, 1996> (1996) 0.01
    0.0073656123 = product of:
      0.02946245 = sum of:
        0.02946245 = weight(_text_:for in 5971) [ClassicSimilarity], result of:
          0.02946245 = score(doc=5971,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.33190575 = fieldWeight in 5971, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.125 = fieldNorm(doc=5971)
      0.25 = coord(1/4)
    
  16. Weibel, S.: Metadata: the foundations for resource description (1995) 0.01
    0.0073656123 = product of:
      0.02946245 = sum of:
        0.02946245 = weight(_text_:for in 5973) [ClassicSimilarity], result of:
          0.02946245 = score(doc=5973,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.33190575 = fieldWeight in 5973, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.125 = fieldNorm(doc=5973)
      0.25 = coord(1/4)
    
  17. Miller, P.: Metadata for the masses (1995) 0.01
    0.0073656123 = product of:
      0.02946245 = sum of:
        0.02946245 = weight(_text_:for in 5974) [ClassicSimilarity], result of:
          0.02946245 = score(doc=5974,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.33190575 = fieldWeight in 5974, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.125 = fieldNorm(doc=5974)
      0.25 = coord(1/4)
    
  18. Sowards, S.W.: ¬A typology for ready reference Web sites in libraries (1996) 0.01
    0.0073656123 = product of:
      0.02946245 = sum of:
        0.02946245 = weight(_text_:for in 944) [ClassicSimilarity], result of:
          0.02946245 = score(doc=944,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.33190575 = fieldWeight in 944, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0625 = fieldNorm(doc=944)
      0.25 = coord(1/4)
    
    Abstract
     Many libraries manage Web sites intended to provide their users with online resources suitable for answering reference questions. Most of these sites can be analyzed in terms of their depth and their organizing and searching features. Composing a typology based on these factors sheds light on the critical design decisions that influence whether users of these sites succeed or fail to find information easily, rapidly, and accurately. The same analysis highlights some larger design issues, both for Web sites and for information management at large.
  19. Shneiderman, B.; Byrd, D.; Croft, W.B.: Clarifying search : a user-interface framework for text searches (1997) 0.01
    0.0073656123 = product of:
      0.02946245 = sum of:
        0.02946245 = weight(_text_:for in 1471) [ClassicSimilarity], result of:
          0.02946245 = score(doc=1471,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.33190575 = fieldWeight in 1471, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.125 = fieldNorm(doc=1471)
      0.25 = coord(1/4)
    
  20. Shneiderman, B.; Byrd, D.; Croft, W.B.: Clarifying search : a user-interface framework for text searches (1997) 0.01
    0.0073656123 = product of:
      0.02946245 = sum of:
        0.02946245 = weight(_text_:for in 1258) [ClassicSimilarity], result of:
          0.02946245 = score(doc=1258,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.33190575 = fieldWeight in 1258, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0625 = fieldNorm(doc=1258)
      0.25 = coord(1/4)
    
    Abstract
     Current user interfaces for textual database searching leave much to be desired: individually, they are often confusing, and as a group, they are seriously inconsistent. We propose a four-phase framework for user-interface design: the framework provides common structure and terminology for searching while preserving the distinct features of individual collections and search mechanisms. Users will benefit from faster learning, increased comprehension, and better control, leading to more effective searches and higher satisfaction.

Types

  • a 40
  • m 3
  • r 3
  • i 2
  • b 1
  • n 1
  • s 1
