Search (1565 results, page 1 of 79)

  • year_i:[1990 TO 2000}
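  A note on the filter shown above: year_i:[1990 TO 2000} uses Lucene/Solr range syntax, in which a square bracket marks an inclusive bound and a curly brace an exclusive one, so the filter matches 1990 <= year_i < 2000. The short Python sketch below merely restates that convention; the parameter names (q, fq, rows) and the example query term are illustrative assumptions, not details taken from this system.

      # Illustrative sketch only: a Solr-style filter query using the range
      # syntax of the active facet. The field name year_i comes from the page
      # above; the query term and parameter names are assumed.
      params = {
          "q": "methodology",              # assumed example query term
          "fq": "year_i:[1990 TO 2000}",   # '[' = inclusive lower, '}' = exclusive upper bound
          "rows": 20,                      # results per page, assumed
      }
      print(params["fq"])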
  1. Seruga, J.: Object-oriented modeling of a library information system (1997) 0.06
    0.05689327 = product of:
      0.17067981 = sum of:
        0.17067981 = sum of:
          0.11958151 = weight(_text_:methodology in 8477) [ClassicSimilarity], result of:
            0.11958151 = score(doc=8477,freq=4.0), product of:
              0.21236731 = queryWeight, product of:
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.047143444 = queryNorm
              0.5630881 = fieldWeight in 8477, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.0625 = fieldNorm(doc=8477)
          0.051098287 = weight(_text_:22 in 8477) [ClassicSimilarity], result of:
            0.051098287 = score(doc=8477,freq=2.0), product of:
              0.16508831 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047143444 = queryNorm
              0.30952093 = fieldWeight in 8477, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=8477)
      0.33333334 = coord(1/3)
    
    Abstract
    Analyses the OPAC at the Australian Catholic University, Castle Hill Campus, New South Wales, using an object-oriented model following Rumbaugh's methodology, as described in 'Object-oriented modelling and design' (1991). The process of analysis, although difficult, is one of the most effective ways of determining each function of a system of this kind. The methodology is especially useful as the data structure, behavioural and functional aspects of the system are displayed in separate diagrams. This is an advantage for those analysing systems, who can display many factors without confusing the different aspects involved in the analysis process.
    Source
    LASIE. 28(1997) no.4, S.22-34
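    A reading aid for the score breakdown above (and the similar ones that follow): these are Lucene ClassicSimilarity explanations. Each matching term contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm, tf = sqrt(termFreq) and idf = 1 + ln(maxDocs / (docFreq + 1)); the term contributions are summed and multiplied by the coord factor. The minimal Python sketch below re-derives the figures shown for the first result from nothing but the values in that tree; it is an editorial illustration, not part of the original record.

        import math

        MAX_DOCS = 44218
        QUERY_NORM = 0.047143444               # queryNorm shared by both terms

        def idf(doc_freq):
            # ClassicSimilarity idf; gives ~4.504705 for docFreq=1328
            return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

        def term_score(doc_freq, freq, field_norm):
            tf = math.sqrt(freq)                              # 2.0 for freq=4.0
            query_weight = idf(doc_freq) * QUERY_NORM         # ~0.21236731
            field_weight = tf * idf(doc_freq) * field_norm    # ~0.5630881
            return query_weight * field_weight

        w_methodology = term_score(doc_freq=1328, freq=4.0, field_norm=0.0625)  # ~0.11958151
        w_22          = term_score(doc_freq=3622, freq=2.0, field_norm=0.0625)  # ~0.051098287
        total = (w_methodology + w_22) * (1.0 / 3.0)          # coord(1/3) -> ~0.05689327
        print(round(total, 8))

    The per-term pattern is the same for every entry on this page; only termFreq, fieldNorm and the coord factors change from entry to entry.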
  2. Parer, D.; Parrott, K.: Management practices in the electronic records environment (1994) 0.06
    0.056522995 = product of:
      0.16956899 = sum of:
        0.16956899 = sum of:
          0.10569612 = weight(_text_:methodology in 1000) [ClassicSimilarity], result of:
            0.10569612 = score(doc=1000,freq=2.0), product of:
              0.21236731 = queryWeight, product of:
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.047143444 = queryNorm
              0.49770427 = fieldWeight in 1000, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.078125 = fieldNorm(doc=1000)
          0.06387286 = weight(_text_:22 in 1000) [ClassicSimilarity], result of:
            0.06387286 = score(doc=1000,freq=2.0), product of:
              0.16508831 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047143444 = queryNorm
              0.38690117 = fieldWeight in 1000, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=1000)
      0.33333334 = coord(1/3)
    
    Abstract
    Describes 3 records management approaches to electronic records and assesses the archival interests involved in each. Suggests utilizing the Information Management methodology to devise an organization-wide Information Management Plan, incorporating records management and archival requirements, to facilitate the identification of records of value to the organization, to be managed as any other corporate asset.
    Source
    Archives and manuscripts. 22(1994) no.1, S.106-122
  3. Neelameghan, A.: Application of S.R. Ranganathan's postulates and principles of the general theory of knowledge classification to database design and information retrieval (1993) 0.05
    0.045218393 = product of:
      0.13565518 = sum of:
        0.13565518 = sum of:
          0.08455689 = weight(_text_:methodology in 6740) [ClassicSimilarity], result of:
            0.08455689 = score(doc=6740,freq=2.0), product of:
              0.21236731 = queryWeight, product of:
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.047143444 = queryNorm
              0.3981634 = fieldWeight in 6740, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.0625 = fieldNorm(doc=6740)
          0.051098287 = weight(_text_:22 in 6740) [ClassicSimilarity], result of:
            0.051098287 = score(doc=6740,freq=2.0), product of:
              0.16508831 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047143444 = queryNorm
              0.30952093 = fieldWeight in 6740, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=6740)
      0.33333334 = coord(1/3)
    
    Abstract
    Discusses Ranganathan's holistic integrative approach and applications of the analytico-synthetic methodology. Deals with databases that are mostly object-oriented, factual information bases. Defines data entity, data model and subject. Covers the organization of ideas in specialized databases; a generalized subject structure model; designing specialized databases; indexes and searches; and user interaction with the system.
    Source
    International cataloguing and bibliographic control. 22(1993) no.3, S.46-50
  4. Jascó, P.; Tiszai, J.: Now featuring ... movie databases : Pt.1: get the popcorn! Pt.2: the software (1995) 0.05
    0.045218393 = product of:
      0.13565518 = sum of:
        0.13565518 = sum of:
          0.08455689 = weight(_text_:methodology in 1789) [ClassicSimilarity], result of:
            0.08455689 = score(doc=1789,freq=2.0), product of:
              0.21236731 = queryWeight, product of:
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.047143444 = queryNorm
              0.3981634 = fieldWeight in 1789, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.0625 = fieldNorm(doc=1789)
          0.051098287 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
            0.051098287 = score(doc=1789,freq=2.0), product of:
              0.16508831 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047143444 = queryNorm
              0.30952093 = fieldWeight in 1789, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1789)
      0.33333334 = coord(1/3)
    
    Abstract
    Review of film directory databases available online, on CD-ROM and on diskette. Examines the content of these databases. Explains the methodology used in the study and analyzes: size and composition, database coverage, and record content, looking at bibliographic identification data, cast and credit information, classification information, and evaluative information.
    Source
    Database. 18(1995) no.1, S.22-32 (pt.1); no.2, S.29-39 (pt.2)
  5. Cochenour, D.: Linking remote users and information : cataloguing Internet publications (1994) 0.05
    0.045218393 = product of:
      0.13565518 = sum of:
        0.13565518 = sum of:
          0.08455689 = weight(_text_:methodology in 2170) [ClassicSimilarity], result of:
            0.08455689 = score(doc=2170,freq=2.0), product of:
              0.21236731 = queryWeight, product of:
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.047143444 = queryNorm
              0.3981634 = fieldWeight in 2170, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.0625 = fieldNorm(doc=2170)
          0.051098287 = weight(_text_:22 in 2170) [ClassicSimilarity], result of:
            0.051098287 = score(doc=2170,freq=2.0), product of:
              0.16508831 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047143444 = queryNorm
              0.30952093 = fieldWeight in 2170, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=2170)
      0.33333334 = coord(1/3)
    
    Abstract
    Libraries can add value to Internet resources by adding them to the library's catalogue in a manner consistent with the other resources held within the collection. Reports on OCLC studies into cataloguing Internet resources and accessing electronic periodicals. Existing retrieval methods on the Internet are limited because of shallow directory structures and idiosyncratic naming conventions. Catalogue entries for electronic resources need to provide a complete description of the access methodology if they are to satisfactorily connect remote users without the immediate possibility of backup from reference staff
    Date
    17.10.1995 18:22:54
  6. Satija, M.P.: Birth centenary literature on Ranganathan : a review (1993) 0.05
    0.045218393 = product of:
      0.13565518 = sum of:
        0.13565518 = sum of:
          0.08455689 = weight(_text_:methodology in 2518) [ClassicSimilarity], result of:
            0.08455689 = score(doc=2518,freq=2.0), product of:
              0.21236731 = queryWeight, product of:
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.047143444 = queryNorm
              0.3981634 = fieldWeight in 2518, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.0625 = fieldNorm(doc=2518)
          0.051098287 = weight(_text_:22 in 2518) [ClassicSimilarity], result of:
            0.051098287 = score(doc=2518,freq=2.0), product of:
              0.16508831 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047143444 = queryNorm
              0.30952093 = fieldWeight in 2518, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=2518)
      0.33333334 = coord(1/3)
    
    Abstract
    Discusses the books and articles written to commemorate the centenary of the birth of S.R. Ranganathan in 1992. 9 books and 6 special issues of journals were published for the occasion; in addition, articles about Ranganathan appeared in at least 10 other periodicals. Topics covered included Ranganathan's biography, his research methodology, his influence on classification and library science, and evaluations of his work.
    Date
    5. 1.1999 16:27:22
  7. Gonzalez, A.C.: Analisis y diseno de sistemas de gestion electronica de documentacion en grandes entidades [Analysis and design of electronic document management systems in large organisations] (1997) 0.05
    0.045218393 = product of:
      0.13565518 = sum of:
        0.13565518 = sum of:
          0.08455689 = weight(_text_:methodology in 2923) [ClassicSimilarity], result of:
            0.08455689 = score(doc=2923,freq=2.0), product of:
              0.21236731 = queryWeight, product of:
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.047143444 = queryNorm
              0.3981634 = fieldWeight in 2923, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.0625 = fieldNorm(doc=2923)
          0.051098287 = weight(_text_:22 in 2923) [ClassicSimilarity], result of:
            0.051098287 = score(doc=2923,freq=2.0), product of:
              0.16508831 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047143444 = queryNorm
              0.30952093 = fieldWeight in 2923, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=2923)
      0.33333334 = coord(1/3)
    
    Abstract
    The successful implementation of Electronic Document Management Systems (EDMS) requires a prior design based on a methodology that includes the following key steps: capture of critical information and analysis of the current document situation; functional and/or technical options that involve the treatment of the document fonds considered; document management applications design (data, text, images, audio, video) under a functional, technical and economic focus; and a global and modular project defined as a strategic EDMS plan.
    Date
    11. 2.1999 21:02:22
  8. Murray, I.: Funding access to the Internet in public libraries : a review article (1998) 0.05
    0.045218393 = product of:
      0.13565518 = sum of:
        0.13565518 = sum of:
          0.08455689 = weight(_text_:methodology in 3529) [ClassicSimilarity], result of:
            0.08455689 = score(doc=3529,freq=2.0), product of:
              0.21236731 = queryWeight, product of:
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.047143444 = queryNorm
              0.3981634 = fieldWeight in 3529, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.0625 = fieldNorm(doc=3529)
          0.051098287 = weight(_text_:22 in 3529) [ClassicSimilarity], result of:
            0.051098287 = score(doc=3529,freq=2.0), product of:
              0.16508831 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047143444 = queryNorm
              0.30952093 = fieldWeight in 3529, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=3529)
      0.33333334 = coord(1/3)
    
    Abstract
    The decision to provide access to the Internet for members of the public raises some issues, in particular how such a service is to be funded. Gives examples of current practice and draws broad conclusions from a sample of opinions voiced by members of the public. It puts forward a SWOT methodology as one possible approach to assist in identifying the crucial factors in the implementation of an Internet public access service, emphasising what factors must be obtained to result in a cost-effective service.
    Date
    8. 5.1999 19:48:22
  9. Meadow, C.T.: Speculations on the measurement and use of user characteristics in information retrieval experimentation (1994) 0.04
    0.039566096 = product of:
      0.118698284 = sum of:
        0.118698284 = sum of:
          0.07398728 = weight(_text_:methodology in 1795) [ClassicSimilarity], result of:
            0.07398728 = score(doc=1795,freq=2.0), product of:
              0.21236731 = queryWeight, product of:
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.047143444 = queryNorm
              0.348393 = fieldWeight in 1795, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1795)
          0.044711 = weight(_text_:22 in 1795) [ClassicSimilarity], result of:
            0.044711 = score(doc=1795,freq=2.0), product of:
              0.16508831 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047143444 = queryNorm
              0.2708308 = fieldWeight in 1795, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1795)
      0.33333334 = coord(1/3)
    
    Abstract
    Presents a composite view of several recent user studies in information retrieval. Contains personal conclusions and speculations based on these studies, rather than formal statistical results, which so often are not comparable from 1 experiment to another. Suggests a taxonomy of user characteristics for such studies, in order to make results comparable. Discusses methods and effects of user training, the manner of expression of a query or information need, the conduct of a search, use of the system command language or its equivalent, analysis by the user of retrieved information, and user satisfaction with the outcome. Concludes with suggestions for system design and experimental methodology.
    Source
    Canadian journal of information and library science. 19(1994) no.4, S.1-22
  10. Takahashi, K.; Liang, E.: Analysis and design of Web-based information systems (1997) 0.04
    0.039566096 = product of:
      0.118698284 = sum of:
        0.118698284 = sum of:
          0.07398728 = weight(_text_:methodology in 2741) [ClassicSimilarity], result of:
            0.07398728 = score(doc=2741,freq=2.0), product of:
              0.21236731 = queryWeight, product of:
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.047143444 = queryNorm
              0.348393 = fieldWeight in 2741, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2741)
          0.044711 = weight(_text_:22 in 2741) [ClassicSimilarity], result of:
            0.044711 = score(doc=2741,freq=2.0), product of:
              0.16508831 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047143444 = queryNorm
              0.2708308 = fieldWeight in 2741, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2741)
      0.33333334 = coord(1/3)
    
    Abstract
    Develops a method for the analysis and design of Web-based information systems (WBIs), and two tools to support the method, WebArchitect and PilotBoat. Aims to efficiently develop WBIs that best support particular business processes at the least maintenance cost. It consists of 2 approaches: static and dynamic. Uses the entity-relationship (E-R) approach for the static aspects of WBIs and scenario approaches for the dynamic aspects. The E-R analysis and design, based on Relationship Management Methodology (RMM), defines what the entities are and how they are related. Applies the approaches to the WWW6 proceedings site.
    Date
    1. 8.1996 22:08:06
  11. Parker, V.: Cataloguing map series and serials (1999) 0.04
    0.039566096 = product of:
      0.118698284 = sum of:
        0.118698284 = sum of:
          0.07398728 = weight(_text_:methodology in 5323) [ClassicSimilarity], result of:
            0.07398728 = score(doc=5323,freq=2.0), product of:
              0.21236731 = queryWeight, product of:
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.047143444 = queryNorm
              0.348393 = fieldWeight in 5323, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5323)
          0.044711 = weight(_text_:22 in 5323) [ClassicSimilarity], result of:
            0.044711 = score(doc=5323,freq=2.0), product of:
              0.16508831 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047143444 = queryNorm
              0.2708308 = fieldWeight in 5323, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5323)
      0.33333334 = coord(1/3)
    
    Abstract
    This article defines and outlines the characteristics of map series, map sets, map serials, maps in multiple editions and multi-sheet single maps. Brief instructions on sources of information and general methodology used in gathering information prior to creating the entry are presented. The different methods which may be used for cataloguing series and serials are explored. There is also a brief section on cataloguing bi- and multi-lingual works in a bilingual environment. For each relevant area of description, instructions and examples are given to illustrate problems. Sections on analysis (including multi-level cataloguing).
    Date
    26. 7.2006 10:44:22
  12. Olsson, M.: Discourse : a new theoretical framework for examining information behaviour in its social context (1999) 0.04
    0.03555829 = product of:
      0.10667487 = sum of:
        0.10667487 = sum of:
          0.07473844 = weight(_text_:methodology in 295) [ClassicSimilarity], result of:
            0.07473844 = score(doc=295,freq=4.0), product of:
              0.21236731 = queryWeight, product of:
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.047143444 = queryNorm
              0.35193008 = fieldWeight in 295, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.0390625 = fieldNorm(doc=295)
          0.03193643 = weight(_text_:22 in 295) [ClassicSimilarity], result of:
            0.03193643 = score(doc=295,freq=2.0), product of:
              0.16508831 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047143444 = queryNorm
              0.19345059 = fieldWeight in 295, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=295)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper outlines a theoretical framework for examining the information behaviour of groups, based on the concept of 'discourse' first put forward by the French philosopher and historian Michel Foucault. The paper follows Talja (1996) in using discourse as a metatheory to underpin information behaviour research, but holds that information behaviour researchers need to find their own methods for exploring the issues raised by discursive theory. The paper begins by outlining some of the major concepts that distinguish this 'discourse analytic' approach from existing approaches to group-based information behaviour research. The paper also proposes a methodology for examining information behaviour in the context of a discourse analytic approach. This methodology uses the results of author co-citation analysis, as pioneered by White & Griffith (1981), to identify discourses within a broad subject area or knowledge field. These discourses are then used as the basis for further study using social network analysis (Haythornthwaite, 1996). The aim of this second stage in the research process is to develop an understanding of the role of discourse in shaping information behaviour, through understanding the nature of the relationships within the discourse. The paper puts forward 'discourse' as an alternative to current approaches to group-based information behaviour research based on 'user' or 'target' groups. Many prevailing approaches to information behaviour research can be broadly divided into those that describe rather than theorise about information behaviour and those that seek to explain information behaviour by focusing their theoretical attention on the individual information user.
    Date
    22. 3.2002 9:52:19
  13. Saadoun, A.: A knowledge engineering framework for intelligent retrieval of legal case studies (1997) 0.03
    0.033913795 = product of:
      0.10174138 = sum of:
        0.10174138 = sum of:
          0.063417666 = weight(_text_:methodology in 2745) [ClassicSimilarity], result of:
            0.063417666 = score(doc=2745,freq=2.0), product of:
              0.21236731 = queryWeight, product of:
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.047143444 = queryNorm
              0.29862255 = fieldWeight in 2745, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.046875 = fieldNorm(doc=2745)
          0.038323715 = weight(_text_:22 in 2745) [ClassicSimilarity], result of:
            0.038323715 = score(doc=2745,freq=2.0), product of:
              0.16508831 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047143444 = queryNorm
              0.23214069 = fieldWeight in 2745, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2745)
      0.33333334 = coord(1/3)
    
    Abstract
    Knowledge engineering has been used to design an intelligent interface for the Juris-Data database, 1 of the largest case study databases in France. It was based on the legal classification elaborated by the Juris-Data group to index the cases. The system aims to help users find the case study most relevant to their own. A methodology for constructing a legal classification of the primary documents was designed, together with a framework for index construction. This led to the implementation of a Legal Case Studies Engineering Framework based on the accumulated experimentation and the methodologies designed. It consists of a set of computerized tools which support the life cycle of legal documents, from their processing by legal experts to their consultation by clients.
    Date
    22. 1.1999 19:20:11
  14. Huth, M.: Symbolic and sub-symbolic knowledge organization in the Computational Theory of Mind (1995) 0.03
    0.028261498 = product of:
      0.08478449 = sum of:
        0.08478449 = sum of:
          0.05284806 = weight(_text_:methodology in 1086) [ClassicSimilarity], result of:
            0.05284806 = score(doc=1086,freq=2.0), product of:
              0.21236731 = queryWeight, product of:
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.047143444 = queryNorm
              0.24885213 = fieldWeight in 1086, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1086)
          0.03193643 = weight(_text_:22 in 1086) [ClassicSimilarity], result of:
            0.03193643 = score(doc=1086,freq=2.0), product of:
              0.16508831 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047143444 = queryNorm
              0.19345059 = fieldWeight in 1086, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1086)
      0.33333334 = coord(1/3)
    
    Abstract
    We sketch the historic transformation of culturally grown techniques of symbol manipulation, such as basic arithmetic in the decimal number system, to the full-fledged version of the Computational Theory of Mind. Symbol manipulation systems had been considered by Leibniz as a methodology for inferring knowledge in a secure and purely mechanical fashion. Such 'inference calculi' were considered as mere artefacts which could not possibly encompass all human knowledge acquisition. In Alan Turing's work one notices a crucial shift of perspective. The abstract mathematical states of a Turing machine (a kind of 'calculus universalis' that Leibniz was looking for) are claimed to correspond to equivalent psychological states. Artefacts are turned into faithful models of human cognition. A further step toward the Computational Theory of Mind was the physical symbol system hypothesis, contending to have found a necessary and sufficient criterion for the presence of 'intelligence' in operative mediums. This, together with Chomsky's foundational work on linguistics, led naturally to the Computational Theory of Mind as set out by Jerry Fodor and Zenon Pylyshyn. We discuss problematic aspects of this theory. Then we deal with another paradigm of the Computational Theory of Mind based on network automata. This sub-symbolic paradigm seems to avoid problems occurring in symbolic computations, like the 'frame problem' and 'graceful degradation'.
    Source
    Knowledge organization. 22(1995) no.1, S.10-17
  15. Efthimiadis, E.N.: User choices : a new yardstick for the evaluation of ranking algorithms for interactive query expansion (1995) 0.03
    0.028261498 = product of:
      0.08478449 = sum of:
        0.08478449 = sum of:
          0.05284806 = weight(_text_:methodology in 5697) [ClassicSimilarity], result of:
            0.05284806 = score(doc=5697,freq=2.0), product of:
              0.21236731 = queryWeight, product of:
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.047143444 = queryNorm
              0.24885213 = fieldWeight in 5697, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5697)
          0.03193643 = weight(_text_:22 in 5697) [ClassicSimilarity], result of:
            0.03193643 = score(doc=5697,freq=2.0), product of:
              0.16508831 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047143444 = queryNorm
              0.19345059 = fieldWeight in 5697, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5697)
      0.33333334 = coord(1/3)
    
    Abstract
    The performance of 8 ranking algorithms was evaluated with respect to their effectiveness in ranking terms for query expansion. The evaluation was conducted within an investigation of interactive query expansion and relevance feedback in a real operational environment. Focuses on the identification of algorithms that most effectively take cognizance of user preferences. User choices (i.e. the terms selected by the searchers for the query expansion search) provided the yardstick for the evaluation of the 8 ranking algorithms. This methodology introduces a user-oriented approach to evaluating ranking algorithms for query expansion, in contrast to the standard, system-oriented approaches. Similarities in the performance of the 8 algorithms and the ways these algorithms rank terms were the main focus of this evaluation. The findings demonstrate that the r-lohi, wpq, enim, and porter algorithms have similar performance in bringing good terms to the top of a ranked list of terms for query expansion. However, further evaluation of the algorithms in different (e.g. full text) environments is needed before these results can be generalized beyond the context of the present study.
    Date
    22. 2.1996 13:14:10
  16. Abad-Garcia, M.F.; González-Teruel, A.; Sanjuan-Nebot, L.: Information needs of physicians at the University Clinic Hospital in Valencia-Spain (1999) 0.03
    0.028261498 = product of:
      0.08478449 = sum of:
        0.08478449 = sum of:
          0.05284806 = weight(_text_:methodology in 291) [ClassicSimilarity], result of:
            0.05284806 = score(doc=291,freq=2.0), product of:
              0.21236731 = queryWeight, product of:
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.047143444 = queryNorm
              0.24885213 = fieldWeight in 291, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.0390625 = fieldNorm(doc=291)
          0.03193643 = weight(_text_:22 in 291) [ClassicSimilarity], result of:
            0.03193643 = score(doc=291,freq=2.0), product of:
              0.16508831 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047143444 = queryNorm
              0.19345059 = fieldWeight in 291, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=291)
      0.33333334 = coord(1/3)
    
    Abstract
    The study of information needs has been a subject of attention for library and information science professionals for more than four decades and has led to the publication of a great amount of literature. Among the reasons for this interest we can mention, on the one hand, the utility that the results of this type of research have in improving the mechanisms for providing information in the professional environment and, on the other hand, no less important, the recognition of the methodological problems that are revealed when previously reported studies are analysed (Gorman, 1995; Forsyte, et al., 1992). One of the reasons for this kind of research is, without doubt, the need to harmonise the potential that the new technologies offer for accessing and managing large quantities of information with the information needs of the users. Its objective is to provide appropriate information systems for each environment, in this case the medical field (Timpka, et al., 1989; Forsyte, et al., 1992; Gorman, 1995; Gorman & Helfand, 1995; Abad-Garcia, 1997).
    Date
    22. 3.2002 9:43:33
  17. Wijnhoven, F.; Wognum, P.M.; Weg, R.L.W. van de: Knowledge ontology development (1996) 0.03
    0.028261498 = product of:
      0.08478449 = sum of:
        0.08478449 = sum of:
          0.05284806 = weight(_text_:methodology in 907) [ClassicSimilarity], result of:
            0.05284806 = score(doc=907,freq=2.0), product of:
              0.21236731 = queryWeight, product of:
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.047143444 = queryNorm
              0.24885213 = fieldWeight in 907, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.504705 = idf(docFreq=1328, maxDocs=44218)
                0.0390625 = fieldNorm(doc=907)
          0.03193643 = weight(_text_:22 in 907) [ClassicSimilarity], result of:
            0.03193643 = score(doc=907,freq=2.0), product of:
              0.16508831 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047143444 = queryNorm
              0.19345059 = fieldWeight in 907, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=907)
      0.33333334 = coord(1/3)
    
    Abstract
    Knowledge-containing documents and data about knowledge have been handled in stable environments by bureaucratic systems using very stable knowledge ontologies. These systems, though not always very effective in such environments, will become highly ineffective in environments where knowledge has to be updated and replaced frequently. Moreover, organizations in such dynamic environments also use knowledge from external resources extensively. This makes the development of a stable ontology for knowledge storage and retrieval particularly complicated. This paper describes eight context classes of knowledge ontology development and explores elements of a method for ontology development. These classes are based on the differences in contexts defined along three dimensions: knowledge dynamics, complexity and social dispersion. Ontology development matches these contexts and ontology needs defined by (logical and social) structure and ontology maturity. The classification framework and methodology are applied to two cases. The first case illustrates a descriptive use of our framework to characterize ontology development in an academic environment. The second case illustrates a normative use of our framework. The method proposed seemed to be empirically valid and rich, and useful for detecting options for ontology improvement.
    Source
    Knowledge management: organization, competence and methodology. Proceedings of the Fourth International ISMICK Symposium, 21-22 October 1996, Netherlands. Ed.: J.F. Schreinemakers
  18. Furniss, P.: A proposed methodology for examining the provision of subject access in the OPAC (1990) 0.03
    0.028185632 = product of:
      0.08455689 = sum of:
        0.08455689 = product of:
          0.16911379 = sum of:
            0.16911379 = weight(_text_:methodology in 352) [ClassicSimilarity], result of:
              0.16911379 = score(doc=352,freq=2.0), product of:
                0.21236731 = queryWeight, product of:
                  4.504705 = idf(docFreq=1328, maxDocs=44218)
                  0.047143444 = queryNorm
                0.7963268 = fieldWeight in 352, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.504705 = idf(docFreq=1328, maxDocs=44218)
                  0.125 = fieldNorm(doc=352)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  19. Richards, L.; Richards, T.; Johnston, M.: Analysing unstructured information : can computers help? (1992) 0.02
    0.024912816 = product of:
      0.07473844 = sum of:
        0.07473844 = product of:
          0.14947689 = sum of:
            0.14947689 = weight(_text_:methodology in 3129) [ClassicSimilarity], result of:
              0.14947689 = score(doc=3129,freq=4.0), product of:
                0.21236731 = queryWeight, product of:
                  4.504705 = idf(docFreq=1328, maxDocs=44218)
                  0.047143444 = queryNorm
                0.70386016 = fieldWeight in 3129, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.504705 = idf(docFreq=1328, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3129)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Describes the analysis of unstructured information, particularly text. Discusses the information-processing methodology and theory assumed by computer-based qualitative data analysis software, including the methodology of the non-numerical unstructured data indexing, searching and theorising system. Explains qualitative reasoning.
  20. Noyons, E.C.M.; Raan, A.F.J. van: Monitoring scientific developments from a dynamic perspective : self-organized structuring to map neural network research (1998) 0.02
    0.024912816 = product of:
      0.07473844 = sum of:
        0.07473844 = product of:
          0.14947689 = sum of:
            0.14947689 = weight(_text_:methodology in 331) [ClassicSimilarity], result of:
              0.14947689 = score(doc=331,freq=4.0), product of:
                0.21236731 = queryWeight, product of:
                  4.504705 = idf(docFreq=1328, maxDocs=44218)
                  0.047143444 = queryNorm
                0.70386016 = fieldWeight in 331, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.504705 = idf(docFreq=1328, maxDocs=44218)
                  0.078125 = fieldNorm(doc=331)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    With the help of bibliometric mapping techniques, we have developed a methodology of 'self-organized' structuring of scientific fields. This methodology is applied to the field of neural network research

Types

  • a 1330
  • m 136
  • s 75
  • el 19
  • i 13
  • r 10
  • b 7
  • x 6
  • ? 5
  • d 3
  • p 2
  • au 1
  • h 1
  • n 1
