Search (32 results, page 1 of 2)

  • × language_ss:"e"
  • × type_ss:"el"
  • × year_i:[1990 TO 2000}
  1. Priss, U.: Description logic and faceted knowledge representation (1999) 0.05
    0.049504973 = product of:
      0.099009946 = sum of:
        0.099009946 = sum of:
          0.056588627 = weight(_text_:systems in 2655) [ClassicSimilarity], result of:
            0.056588627 = score(doc=2655,freq=6.0), product of:
              0.16037072 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.052184064 = queryNorm
              0.35286134 = fieldWeight in 2655, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.046875 = fieldNorm(doc=2655)
          0.042421322 = weight(_text_:22 in 2655) [ClassicSimilarity], result of:
            0.042421322 = score(doc=2655,freq=2.0), product of:
              0.1827397 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052184064 = queryNorm
              0.23214069 = fieldWeight in 2655, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2655)
      0.5 = coord(1/2)
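    The tree above is Lucene ClassicSimilarity (TF-IDF) explain output. As a reading aid, the following minimal Python sketch reproduces the figures for the "systems" clause of this entry from the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm); coord(1/2) halves the sum because only half of the query clauses matched. Small differences from the printed values come from Lucene's single-precision arithmetic.
```python
import math

# ClassicSimilarity building blocks (standard Lucene TF-IDF definitions).
def tf(freq):
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    return 1.0 + math.log(max_docs / (doc_freq + 1))

# Figures from the "systems" clause of doc 2655 above.
query_norm = 0.052184064
field_norm = 0.046875
idf_systems = idf(5561, 44218)                      # ~3.0732
query_weight = idf_systems * query_norm             # ~0.16037
field_weight = tf(6.0) * idf_systems * field_norm   # ~0.35286
clause_score = query_weight * field_weight          # ~0.05659

# Adding the "22" clause and applying coord(1/2) gives the entry score ~0.0495.
print(round(0.5 * (clause_score + 0.042421322), 4))
```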
    
    Abstract
    The term "facet" was introduced into the field of library classification systems by Ranganathan in the 1930's [Ranganathan, 1962]. A facet is a viewpoint or aspect. In contrast to traditional classification systems, faceted systems are modular in that a domain is analyzed in terms of baseline facets which are then synthesized. In this paper, the term "facet" is used in a broader meaning. Facets can describe different aspects on the same level of abstraction or the same aspect on different levels of abstraction. The notion of facets is related to database views, multicontexts and conceptual scaling in formal concept analysis [Ganter and Wille, 1999], polymorphism in object-oriented design, aspect-oriented programming, views and contexts in description logic and semantic networks. This paper presents a definition of facets in terms of faceted knowledge representation that incorporates the traditional narrower notion of facets and potentially facilitates translation between different knowledge representation formalisms. A goal of this approach is a modular, machine-aided knowledge base design mechanism. A possible application is faceted thesaurus construction for information retrieval and data mining. Reasoning complexity depends on the size of the modules (facets). A more general analysis of complexity will be left for future research.
    Date
    22. 1.2016 17:30:31
  2. Priss, U.: Faceted knowledge representation (1999) 0.04
    0.043804124 = product of:
      0.08760825 = sum of:
        0.08760825 = sum of:
          0.038116705 = weight(_text_:systems in 2654) [ClassicSimilarity], result of:
            0.038116705 = score(doc=2654,freq=2.0), product of:
              0.16037072 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.052184064 = queryNorm
              0.23767869 = fieldWeight in 2654, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2654)
          0.049491543 = weight(_text_:22 in 2654) [ClassicSimilarity], result of:
            0.049491543 = score(doc=2654,freq=2.0), product of:
              0.1827397 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052184064 = queryNorm
              0.2708308 = fieldWeight in 2654, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2654)
      0.5 = coord(1/2)
    
    Abstract
    Faceted Knowledge Representation provides a formalism for implementing knowledge systems. The basic notions of faceted knowledge representation are "unit", "relation", "facet" and "interpretation". Units are atomic elements and can be abstract elements or refer to external objects in an application. Relations are sequences or matrices of 0s and 1s (binary matrices). Facets are relational structures that combine units and relations. Each facet represents an aspect or viewpoint of a knowledge system. Interpretations are mappings that can be used to translate between different representations. This paper introduces the basic notions of faceted knowledge representation. The formalism is applied here to an abstract modeling of a faceted thesaurus as used in information retrieval.
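    The four notions above can be pictured with plain data structures. The sketch below is illustrative only; the names are hypothetical and do not follow Priss's formal notation.
```python
from dataclasses import dataclass

# Illustrative sketch: units are atomic elements, relations are binary (0/1)
# matrices over the units, and a facet combines units with relations.
# All names are hypothetical, not Priss's notation.
@dataclass
class Facet:
    units: list       # atomic elements of this viewpoint
    relations: dict   # relation name -> binary matrix over the units

    def related(self, name, a, b):
        i, j = self.units.index(a), self.units.index(b)
        return self.relations[name][i][j] == 1

thesaurus_facet = Facet(
    units=["retrieval", "indexing", "thesaurus"],
    relations={"broader": [[0, 0, 0], [1, 0, 0], [1, 0, 0]]},
)
print(thesaurus_facet.related("broader", "indexing", "retrieval"))  # True

# An interpretation is a mapping used to translate between representations,
# e.g. between unit names in two facets.
french_to_english = {"recherche": "retrieval", "indexation": "indexing"}
print(french_to_english["recherche"])  # retrieval
```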
    Date
    22. 1.2016 17:30:31
  3. Dunning, A.: Do we still need search engines? (1999) 0.02
    0.024745772 = product of:
      0.049491543 = sum of:
        0.049491543 = product of:
          0.09898309 = sum of:
            0.09898309 = weight(_text_:22 in 6021) [ClassicSimilarity], result of:
              0.09898309 = score(doc=6021,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.5416616 = fieldWeight in 6021, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6021)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Ariadne. 1999, no.22
  4. Strobel, S.: ¬The complete Linux kit : fully configured LINUX system kernel (1997) 0.02
    0.021210661 = product of:
      0.042421322 = sum of:
        0.042421322 = product of:
          0.084842645 = sum of:
            0.084842645 = weight(_text_:22 in 8959) [ClassicSimilarity], result of:
              0.084842645 = score(doc=8959,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.46428138 = fieldWeight in 8959, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=8959)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    16. 7.2002 20:22:55
  5. Birmingham, J.: Internet search engines (1996) 0.02
    0.021210661 = product of:
      0.042421322 = sum of:
        0.042421322 = product of:
          0.084842645 = sum of:
            0.084842645 = weight(_text_:22 in 5664) [ClassicSimilarity], result of:
              0.084842645 = score(doc=5664,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.46428138 = fieldWeight in 5664, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5664)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    10.11.1996 16:36:22
  6. Landry, P.: Subject cataloguing in Switzerland : From multiple subject systems to an eventual transparent multilingual subject access? (1997) 0.02
    0.019058352 = product of:
      0.038116705 = sum of:
        0.038116705 = product of:
          0.07623341 = sum of:
            0.07623341 = weight(_text_:systems in 412) [ClassicSimilarity], result of:
              0.07623341 = score(doc=412,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.47535738 = fieldWeight in 412, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.109375 = fieldNorm(doc=412)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  7. Faceted classifications and thesauri (1995) 0.02
    0.01633573 = product of:
      0.03267146 = sum of:
        0.03267146 = product of:
          0.06534292 = sum of:
            0.06534292 = weight(_text_:systems in 3182) [ClassicSimilarity], result of:
              0.06534292 = score(doc=3182,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.4074492 = fieldWeight in 3182, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3182)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Throughout history, as collections grew, classification systems were developed to organize materials. This paper describes the development and use of two controlled vocabularies: faceted classifications and thesauri.
  8. Croft, W.B.: What do people want from information retrieval? : the top 10 research issues for companies that use and sell IR systems (1995) 0.02
    0.01633573 = product of:
      0.03267146 = sum of:
        0.03267146 = product of:
          0.06534292 = sum of:
            0.06534292 = weight(_text_:systems in 3402) [ClassicSimilarity], result of:
              0.06534292 = score(doc=3402,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.4074492 = fieldWeight in 3402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3402)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  9. McDonough, J.P.: Epistemic engineering : some implications of the sociology of knowledge for information systems design (1994) 0.02
    0.015401474 = product of:
      0.030802948 = sum of:
        0.030802948 = product of:
          0.061605897 = sum of:
            0.061605897 = weight(_text_:systems in 3184) [ClassicSimilarity], result of:
              0.061605897 = score(doc=3184,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.38414678 = fieldWeight in 3184, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3184)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    For digital information systems' design to continue to flourish, it would appear that we may need to incorporate a view of the social in our design efforts. While there are a variety of disciplines and viewpoints within the social sciences which might assist information system designers in an effort to achieve a wider perspective in design, the remainder of this paper will focus on one that seems particularly well-suited for this role, the sociology of knowledge
  10. Information retrieval research : Proceedings of the 19th Annual BCS-IRSG Colloquium on IR Research, Aberdeen, Scotland, 8-9 April 1997 (1997) 0.02
    0.015401474 = product of:
      0.030802948 = sum of:
        0.030802948 = product of:
          0.061605897 = sum of:
            0.061605897 = weight(_text_:systems in 5393) [ClassicSimilarity], result of:
              0.061605897 = score(doc=5393,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.38414678 = fieldWeight in 5393, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5393)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    LCSH
    Information storage and retrieval systems / Research / Congresses
    Subject
    Information storage and retrieval systems / Research / Congresses
  11. Chen, H.: Semantic research for digital libraries (1999) 0.01
    0.014147157 = product of:
      0.028294314 = sum of:
        0.028294314 = product of:
          0.056588627 = sum of:
            0.056588627 = weight(_text_:systems in 1247) [ClassicSimilarity], result of:
              0.056588627 = score(doc=1247,freq=6.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.35286134 = fieldWeight in 1247, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1247)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this era of the Internet and distributed, multimedia computing, new and emerging classes of information systems applications have swept into the lives of office workers and people in general. From digital libraries, multimedia systems, geographic information systems, and collaborative computing to electronic commerce, virtual reality, and electronic video arts and games, these applications have created tremendous opportunities for information and computer science researchers and practitioners. As applications become more pervasive, pressing, and diverse, several well-known information retrieval (IR) problems have become even more urgent. Information overload, a result of the ease of information creation and transmission via the Internet and WWW, has become more troublesome (e.g., even stockbrokers and elementary school students, heavily exposed to various WWW search engines, are versed in such IR terminology as recall and precision). Significant variations in database formats and structures, the richness of information media (text, audio, and video), and an abundance of multilingual information content also have created severe information interoperability problems -- structural interoperability, media interoperability, and multilingual interoperability.
  12. Plotkin, R.C.; Schwartz, M.S.: Data modeling for news clip archive : a prototype solution (1997) 0.01
    0.014147157 = product of:
      0.028294314 = sum of:
        0.028294314 = product of:
          0.056588627 = sum of:
            0.056588627 = weight(_text_:systems in 1259) [ClassicSimilarity], result of:
              0.056588627 = score(doc=1259,freq=6.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.35286134 = fieldWeight in 1259, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1259)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Film, videotape and multimedia archive systems must address the issues of editing, authoring and searching at the media (i.e. tape) or sub-media (i.e. scene) level in addition to the traditional inventory management capabilities associated with the physical media. This paper describes a prototype of a database design for the storage, search and retrieval of multimedia and its related information. It also provides a process by which legacy data can be imported into this schema. The prototype is called the Continuous Media Index (Comix). An implementation of such a digital library solution incorporates multimedia objects, hierarchical relationships and timecode in addition to traditional attribute data. Present video and multimedia archive systems are easily migrated to this architecture. Comix was implemented for a videotape archiving system. It was written for, and implemented using, IBM Digital Library version 1.0. A derivative of Comix is currently in development for customer-specific applications. Principles of the Comix design as well as the importation methods are not specific to the underlying systems used.
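    The abstract does not give the schema itself; the sketch below is a hypothetical illustration of the kind of model described (physical media holding hierarchically related clips addressed by timecode, alongside traditional attribute data), not the actual Comix design.
```python
from dataclasses import dataclass, field

# Hypothetical sketch of a news-clip archive model: a physical tape carries
# inventory-level attributes, and its content is described at the sub-media
# (scene/clip) level by a timecoded hierarchy. Not the actual Comix schema.
@dataclass
class Clip:
    title: str
    tc_in: str                                    # SMPTE-style timecode, e.g. "00:01:23:10"
    tc_out: str
    children: list = field(default_factory=list)  # nested scenes/shots

@dataclass
class Tape:
    barcode: str                                  # physical-media inventory attribute
    clips: list = field(default_factory=list)

tape = Tape("VT-0042", [
    Clip("Opening segment", "00:00:00:00", "00:02:10:05",
         [Clip("Anchor intro", "00:00:05:00", "00:00:40:12")]),
])
print(tape.clips[0].children[0].title)  # Anchor intro
```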
  13. Electronic Dewey (1993) 0.01
    0.014140441 = product of:
      0.028280882 = sum of:
        0.028280882 = product of:
          0.056561764 = sum of:
            0.056561764 = weight(_text_:22 in 1088) [ClassicSimilarity], result of:
              0.056561764 = score(doc=1088,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.30952093 = fieldWeight in 1088, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1088)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Review in: Cataloging and classification quarterly 19(1994) no.1, p.134-137 (M. Carpenter). - A Windows version has since appeared as well: 'Electronic Dewey for Windows'; cf. Knowledge organization 22(1995) no.1, p.17
  14. Hesse, W.; Verrijn-Stuart, A.: Towards a theory of information systems : the FRISCO approach (1999) 0.01
    0.011789299 = product of:
      0.023578597 = sum of:
        0.023578597 = product of:
          0.047157194 = sum of:
            0.047157194 = weight(_text_:systems in 3059) [ClassicSimilarity], result of:
              0.047157194 = score(doc=3059,freq=6.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.29405114 = fieldWeight in 3059, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3059)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Information Systems (IS) is among the most widespread terms in the Computer Science field but a well founded, widely accepted theory of IS is still missing. With the Internet publication of the FRISCO report, the IFIP task group "FRamework of Information System COncepts" has taken a first step towards such a theory. Among the major achievements of this report are: (1) it builds on a solid basis formed by semiotics and ontology, (2) it defines a compendium of about 100 core IS concepts in a coherent and consistent way, (3) it goes beyond the common narrow view of information systems as pure technical artefacts by adopting an interdisciplinary, socio-technical view on them. In the autumn of 1999, a first review of the report and its impact was undertaken at the ISCO-4 conference in Leiden. In a workshop specifically devoted to the subject, the original aims and goals of FRISCO were confirmed to be still valid and the overall approach and achievements of the report were acknowledged. On the other hand, the workshop revealed some misconceptions, errors and weaknesses of the report in its present form, which are to be removed through a comprehensive revision now under way. This paper reports on the results of the Leiden conference and the current revision activities. It also points out some important consequences of the FRISCO approach as a whole.
  15. Swartout, B.; Patil, R.; Knight, K.; Russ, T.: Toward Distributed Use of Large-Scale Ontologies (1996) 0.01
    0.011551105 = product of:
      0.02310221 = sum of:
        0.02310221 = product of:
          0.04620442 = sum of:
            0.04620442 = weight(_text_:systems in 4961) [ClassicSimilarity], result of:
              0.04620442 = score(doc=4961,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.28811008 = fieldWeight in 4961, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4961)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Large-scale knowledge-base systems are difficult and expensive to construct. If we could share knowledge across systems, costs would be reduced. However, because knowledge bases are typically constructed from scratch, each with its own idiosyncratic structure, sharing is difficult. Recent research has focused on the use of ontologies to promote sharing. An ontology is a hierarchically structured set of terms for describing a domain that can be used as a skeletal foundation for a knowledge base. If two knowledge bases are built on a common ontology, knowledge can be more readily shared, since they share a common underlying structure. This paper outlines a set of desiderata for ontologies, and then describes how we have used a large-scale (50,000+ concept) ontology to develop a specialized, domain-specific ontology semi-automatically. We then discuss the relation between ontologies and the process of developing a system, arguing that to be useful, an ontology needs to be created as a "living document", whose development is tightly integrated with the system's. We conclude with a discussion of Web-based ontology tools we are developing to support this approach.
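    As a toy illustration of carving a specialized ontology out of a larger one, the sketch below treats an ontology as a child-to-parent map of terms and keeps a seed set of domain terms together with all of their ancestors. The terms and the helper function are hypothetical, not the 50,000-concept ontology used in the paper.
```python
# Toy ontology as a child -> parent map of terms (hypothetical data).
broad_ontology = {
    "document": "thing", "book": "document", "journal": "document",
    "vehicle": "thing", "car": "vehicle",
}

def skeleton(seeds, parent_of):
    """Keep the seed terms plus all of their ancestors."""
    keep = set()
    for term in seeds:
        while term in parent_of and term not in keep:
            keep.add(term)
            term = parent_of[term]
        keep.add(term)
    return keep

print(sorted(skeleton({"book", "journal"}, broad_ontology)))
# ['book', 'document', 'journal', 'thing']
```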
  16. Jing, Y.; Croft, W.B.: ¬An association thesaurus for information retrieval (199?) 0.01
    0.009529176 = product of:
      0.019058352 = sum of:
        0.019058352 = product of:
          0.038116705 = sum of:
            0.038116705 = weight(_text_:systems in 4494) [ClassicSimilarity], result of:
              0.038116705 = score(doc=4494,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.23767869 = fieldWeight in 4494, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4494)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Although commonly used in both commercial and experimental information retrieval systems, thesauri have not demonstrated consistent benefits for retrieval performance, and it is difficult to construct a thesaurus automatically for large text databases. In this paper, an approach, called PhraseFinder, is proposed to construct collection-dependent association thesauri automatically using large full-text document collections. The association thesaurus can be accessed through natural language queries in INQUERY, an information retrieval system based on the probabilistic inference network. Experiments are conducted in INQUERY to evaluate different types of association thesauri, and thesauri constructed for a variety of collections
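    As a toy illustration of the general idea (not of PhraseFinder itself, which works on phrases over large full-text collections), the sketch below ranks term associations by simple document co-occurrence counts; the data and names are made up.
```python
from collections import Counter
from itertools import combinations

# Tiny toy collection: each document is a list of index terms.
docs = [
    ["information", "retrieval", "thesaurus"],
    ["information", "retrieval", "query"],
    ["thesaurus", "query", "expansion"],
]

# Count how often each ordered pair of terms co-occurs in a document.
cooc = Counter()
for terms in docs:
    for a, b in combinations(sorted(set(terms)), 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

def associations(term, k=3):
    """Top-k associated terms for a query term, ranked by co-occurrence."""
    return Counter({b: n for (a, b), n in cooc.items() if a == term}).most_common(k)

print(associations("retrieval"))  # ('information', 2) ranks first
```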
  17. Lackes, R.; Mack, D.: Computer Based Training on neural nets : Basics, development, and practice (1998) 0.01
    0.009529176 = product of:
      0.019058352 = sum of:
        0.019058352 = product of:
          0.038116705 = sum of:
            0.038116705 = weight(_text_:systems in 964) [ClassicSimilarity], result of:
              0.038116705 = score(doc=964,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.23767869 = fieldWeight in 964, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=964)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This is an easy-to-understand interactive introduction to neural nets and how to apply them. Neural nets are information-processing systems that mimic the basic structure of the human brain. They learn by adjusting the interaction of their individual components (neurons). A neural net can learn from patterns of information supplied as input to generate useful output that can serve as a basis for decision making. Numerous multimedia and interactive components give the learning program an almost game-like feel as it takes the learner from the basics to the use of neural nets for real projects.
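    The learning-by-adjustment idea can be shown with a single artificial neuron. The sketch below trains a minimal perceptron to reproduce logical AND; it is purely illustrative and not material from the CBT.
```python
# Minimal single-neuron (perceptron) sketch: the "net" learns by adjusting the
# weights of its connections whenever its output disagrees with the target.
def train(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1   # adjust each connection weight
            w[1] += lr * err * x2
            b += lr * err
    return w, b

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(samples)
print([(1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)
       for (x1, x2), _ in samples])  # [0, 0, 0, 1]
```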
  18. Oard, D.W.: Alternative approaches for cross-language text retrieval (1997) 0.01
    0.009529176 = product of:
      0.019058352 = sum of:
        0.019058352 = product of:
          0.038116705 = sum of:
            0.038116705 = weight(_text_:systems in 1164) [ClassicSimilarity], result of:
              0.038116705 = score(doc=1164,freq=8.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.23767869 = fieldWeight in 1164, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1164)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The explosive growth of the Internet and other sources of networked information has made automatic mediation of access to networked information sources an increasingly important problem. Much of this information is expressed as electronic text, and it is becoming practical to automatically convert some printed documents and recorded speech to electronic text as well. Thus, automated systems capable of detecting useful documents are finding widespread application. With even a small number of languages it can be inconvenient to issue the same query repeatedly in every language, so users who are able to read more than one language will likely prefer a multilingual text retrieval system over a collection of monolingual systems. And since reading ability in a language does not always imply fluent writing ability in that language, such users will likely find cross-language text retrieval particularly useful for languages in which they are less confident of their ability to express their information needs effectively. The use of such systems can also be beneficial if the user is able to read only a single language. For example, when only a small portion of the document collection will ever be examined by the user, performing retrieval before translation can be significantly more economical than performing translation before retrieval. So when the application is sufficiently important to justify the time and effort required for translation, those costs can be minimized if an effective cross-language text retrieval system is available. Even when translation is not available, there are circumstances in which cross-language text retrieval could be useful to a monolingual user. For example, a researcher might find a paper published in an unfamiliar language useful if that paper contains references to works by the same author that are in the researcher's native language.
    Multilingual text retrieval can be defined as selection of useful documents from collections that may contain several languages (English, French, Chinese, etc.). This formulation allows for the possibility that individual documents might contain more than one language, a common occurrence in some applications. Both cross-language and within-language retrieval are included in this formulation, but it is the cross-language aspect of the problem which distinguishes multilingual text retrieval from its well-studied monolingual counterpart. At the SIGIR 96 workshop on "Cross-Linguistic Information Retrieval" the participants discussed the proliferation of terminology being used to describe the field and settled on "Cross-Language" as the best single description of the salient aspect of the problem. "Multilingual" was felt to be too broad, since that term has also been used to describe systems able to perform within-language retrieval in more than one language but that lack any cross-language capability. "Cross-lingual" and "cross-linguistic" were felt to be equally good descriptions of the field, but "cross-language" was selected as the preferred term in the interest of standardization. Unfortunately, at about the same time the U.S. Defense Advanced Research Projects Agency (DARPA) introduced "translingual" as their preferred term, so we are still some distance from reaching consensus on this matter.
  19. Yee, M.M.: Guidelines for OPAC displays : prepared for the IFLA Task Force on Guidelines for OPAC Displays (1998) 0.01
    0.0094314385 = product of:
      0.018862877 = sum of:
        0.018862877 = product of:
          0.037725754 = sum of:
            0.037725754 = weight(_text_:systems in 5069) [ClassicSimilarity], result of:
              0.037725754 = score(doc=5069,freq=6.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2352409 = fieldWeight in 5069, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5069)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Several studies on OPACs have been made since the early 1980s. However, OPAC development has been governed by systems designers, bibliographic networks and technical services librarians, but not necessarily according to user needs. Existing OPACs demonstrate differences, for example, in the range and complexity of their functional features, terminology and help facilities. While many libraries have already established their own OPACs, there is a need to bring together in the form of guidelines or recommendations a corpus of good practice to assist libraries to design or re-design their OPACs.
    As mentioned above, the guidelines are intended to apply to all types of catalogue, including Web-based catalogues, GUI-based interfaces, and Z39.50-web interfaces. The focus of the guidelines is on the display of cataloguing information (as opposed to circulation, serials check-in, fund accounting, acquisitions, or bindery information). However, some general statements are made concerning the value of displaying to users information that is drawn from these other types of records. The guidelines do not attempt to cover HELP screens, searching methods, or command names and functions. Thus, the guidelines do not directly address the difference between menu-mode access (so common now in GUI and Web interfaces) vs. command-mode access (often completely unavailable in GUI and Web interfaces). However, note that in menu-mode access, the user often has to go through many more screens to attain results than in command-mode access, and each of these screens constitutes a display. The intent is to recommend a standard set of display defaults, defined as features that should be provided for users who have not selected other options, including users who want to begin searching right away without much instruction. It is not the intent to restrict the creativity of system designers who want to build in further options to offer to advanced users (beyond the defaults), advanced users being those people who are willing to put some time into learning how to use the system in more sophisticated and complex ways. The Task Force is aware of the fact that many existing systems are not capable of following all of the recommendations in this document. We hope that existing systems will attempt to work toward the implementation of the guidelines as they develop new versions of their software in the future.
  20. Spink, A.; Wilson, T.; Ellis, D.; Ford, N.: Modeling users' successive searches in digital environments : a National Science Foundation/British Library funded study (1998) 0.01
    0.008252509 = product of:
      0.016505018 = sum of:
        0.016505018 = product of:
          0.033010036 = sum of:
            0.033010036 = weight(_text_:systems in 1255) [ClassicSimilarity], result of:
              0.033010036 = score(doc=1255,freq=6.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.20583579 = fieldWeight in 1255, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1255)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    As digital libraries become a major source of information for many people, we need to know more about how people seek and retrieve information in digital environments. Quite commonly, users with a problem-at-hand and associated question-in-mind repeatedly search a literature for answers, and seek information in stages over extended periods from a variety of digital information resources. The process of repeatedly searching over time in relation to a specific, but possibly an evolving information problem (including changes or shifts in a variety of variables), is called the successive search phenomenon. The study outlined in this paper is currently investigating this new and little explored line of inquiry for information retrieval, Web searching, and digital libraries. The purpose of the research project is to investigate the nature, manifestations, and behavior of successive searching by users in digital environments, and to derive criteria for use in the design of information retrieval interfaces and systems supporting successive searching behavior. This study includes two related projects. The first project is based in the School of Library and Information Sciences at the University of North Texas and is funded by a National Science Foundation POWRE Grant <http://www.nsf.gov/cgi-bin/show?award=9753277>. The second project is based at the Department of Information Studies at the University of Sheffield (UK) and is funded by a grant from the British Library <http://www.shef.ac.uk/~is/research/imrg/uncerty.html> Research and Innovation Center. The broad objectives of each project are to examine the nature and extent of successive search episodes in digital environments by real users over time. The specific aim of the current project is twofold:
    * To characterize progressive changes and shifts that occur in: user situational context; user information problem; uncertainty reduction; user cognitive styles; cognitive and affective states of the user, and consequently in their queries; and
    * To characterize related changes over time in the type and use of information resources and search strategies particularly related to given capabilities of IR systems, and IR search engines, and examine changes in users' relevance judgments and criteria, and characterize their differences.
    The study is an observational, longitudinal data collection in the U.S. and U.K. Three questionnaires are used to collect data: reference, client post search and searcher post search questionnaires. Each successive search episode with a search intermediary for textual materials on the DIALOG Information Service is audiotaped and search transaction logs are recorded. Quantitative analysis includes statistical analysis using Likert scale data from the questionnaires and log-linear analysis of sequential data. Qualitative methods include: content analysis, structuring taxonomies; and diagrams to describe shifts and transitions within and between each search episode. Outcomes of the study are the development of appropriate model(s) for IR interactions in successive search episodes and the derivation of a set of design criteria for interfaces and systems supporting successive searching.
