Search (2593 results, page 1 of 130)

  • year_i:[1990 TO 2000}
  1. Mehl, S.: Systematic alternatives in lexicalization : the cases of gerund translation (1996) 0.27
    0.26552922 = product of:
      0.3982938 = sum of:
        0.33534986 = weight(_text_:systematic in 532) [ClassicSimilarity], result of:
          0.33534986 = score(doc=532,freq=2.0), product of:
            0.33191046 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.05807226 = queryNorm
            1.0103624 = fieldWeight in 532, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.125 = fieldNorm(doc=532)
        0.06294392 = product of:
          0.12588784 = sum of:
            0.12588784 = weight(_text_:22 in 532) [ClassicSimilarity], result of:
              0.12588784 = score(doc=532,freq=2.0), product of:
                0.20335917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05807226 = queryNorm
                0.61904186 = fieldWeight in 532, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=532)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    31. 7.1996 9:22:19
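The relevance explanations above are Lucene ClassicSimilarity (TF-IDF) explain trees. Assuming that formula (tf = sqrt(freq); idf = 1 + ln(maxDocs/(docFreq+1)); fieldWeight = tf · idf · fieldNorm), a short sketch reproduces the top result's score from the printed factors; queryNorm is copied verbatim from the output rather than derived:

```python
import math

MAX_DOCS = 44218          # maxDocs printed in the explain output
QUERY_NORM = 0.05807226   # queryNorm copied verbatim from the output

def idf(doc_freq, max_docs=MAX_DOCS):
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(doc_freq, freq, field_norm):
    # queryWeight * fieldWeight, exactly as in the explain tree
    query_weight = idf(doc_freq) * QUERY_NORM
    field_weight = math.sqrt(freq) * idf(doc_freq) * field_norm
    return query_weight * field_weight

# Result 1: a 'systematic' clause plus a nested '22' clause scaled by
# coord(1/2); the sum is scaled by coord(2/3) since 2 of 3 clauses matched.
s_systematic = term_score(doc_freq=395, freq=2.0, field_norm=0.125)
s_22 = term_score(doc_freq=3622, freq=2.0, field_norm=0.125) * 0.5
total = (s_systematic + s_22) * (2.0 / 3.0)
print(round(total, 6))  # matches the reported 0.26552922
```

The same factors reproduce every score in this listing; only docFreq, freq and fieldNorm vary per term and field.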
  2. Schroeder, K.A.: Layered indexing of images (1998) 0.15
    0.15265805 = product of:
      0.22898707 = sum of:
        0.042293023 = product of:
          0.12687907 = sum of:
            0.12687907 = weight(_text_:objects in 4640) [ClassicSimilarity], result of:
              0.12687907 = score(doc=4640,freq=2.0), product of:
                0.3086582 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.05807226 = queryNorm
                0.41106653 = fieldWeight in 4640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4640)
          0.33333334 = coord(1/3)
        0.18669404 = sum of:
          0.13161811 = weight(_text_:indexing in 4640) [ClassicSimilarity], result of:
            0.13161811 = score(doc=4640,freq=8.0), product of:
              0.22229293 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.05807226 = queryNorm
              0.5920931 = fieldWeight in 4640, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4640)
          0.05507593 = weight(_text_:22 in 4640) [ClassicSimilarity], result of:
            0.05507593 = score(doc=4640,freq=2.0), product of:
              0.20335917 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05807226 = queryNorm
              0.2708308 = fieldWeight in 4640, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4640)
      0.6666667 = coord(2/3)
    
    Abstract
    The General Motors Media Archives (GMMA) project is undertaking one of the largest digitization efforts in the world. GMMA houses over 3 million still photographic images and tens of thousands of motion picture films and videos spanning over a hundred years. The images are a rich history of the evolution of transport, urban growth, fashion, design, and popular culture. GMMA has developed a layered approach to visual indexing that dissects the objects, style and implication of each image, so that the indexing system can accommodate all potential approaches to the material. Explains each layer of indexing and provides examples which show implication layers that can easily be missed
    Date
    9. 4.2000 17:22:00
  3. Fugmann, R.: Subject analysis and indexing : theoretical foundation and practical advice (1993) 0.15
    0.15154323 = product of:
      0.22731484 = sum of:
        0.14671555 = weight(_text_:systematic in 8756) [ClassicSimilarity], result of:
          0.14671555 = score(doc=8756,freq=2.0), product of:
            0.33191046 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.05807226 = queryNorm
            0.44203353 = fieldWeight in 8756, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0546875 = fieldNorm(doc=8756)
        0.08059929 = product of:
          0.16119859 = sum of:
            0.16119859 = weight(_text_:indexing in 8756) [ClassicSimilarity], result of:
              0.16119859 = score(doc=8756,freq=12.0), product of:
                0.22229293 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.05807226 = queryNorm
                0.7251629 = fieldWeight in 8756, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=8756)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
    Contains the following chapters: Information and information systems; Information system survival power; Theoretical considerations on information storage and retrieval; Indexing (representation of the essence of documents; extractive, assignment, consistent indexing, indexing and abstracting, book indexing, index language vocabulary, syntax, concept analysis, evaluation of indexing quality); Technology of information supply; Glossary of terms used; Systematic and 'basic index'
  4. Faraj, N.: Analyse d'une methode d'indexation automatique basée sur une analyse syntaxique de texte (1996) 0.15
    0.14723778 = product of:
      0.22085667 = sum of:
        0.16767493 = weight(_text_:systematic in 685) [ClassicSimilarity], result of:
          0.16767493 = score(doc=685,freq=2.0), product of:
            0.33191046 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.05807226 = queryNorm
            0.5051812 = fieldWeight in 685, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0625 = fieldNorm(doc=685)
        0.053181745 = product of:
          0.10636349 = sum of:
            0.10636349 = weight(_text_:indexing in 685) [ClassicSimilarity], result of:
              0.10636349 = score(doc=685,freq=4.0), product of:
                0.22229293 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.05807226 = queryNorm
                0.47848347 = fieldWeight in 685, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0625 = fieldNorm(doc=685)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Evaluates an automatic indexing method based on syntactic text analysis combined with statistical analysis. Tests many combinations of term categories and weighting methods. The experiment, conducted on a software engineering corpus, shows a systematic improvement from using syntactic term phrases rather than only individual words as index terms
    Footnote
    Translation of the title: Analysis of an automatic indexing method based on syntactic analysis of text
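The contrast described in the abstract, syntactic term phrases versus single-word index terms, can be illustrated with a toy extractor over pre-tagged text; the tag set, the sample sentence and the grouping rule below are illustrative stand-ins, not Faraj's actual method:

```python
# Toy stand-in for syntactic analysis: pre-tagged (word, POS) tokens.
tagged = [("automatic", "ADJ"), ("indexing", "NOUN"), ("of", "ADP"),
          ("software", "NOUN"), ("engineering", "NOUN"), ("corpora", "NOUN")]

def candidate_phrases(tokens):
    """Collect maximal ADJ/NOUN runs ending in a NOUN; keep multi-word runs.

    Single nouns are dropped, so the result contains only the syntactic
    phrases that a single-word baseline would miss."""
    phrases, run = [], []
    for word, pos in tokens + [("", "END")]:   # sentinel forces a final flush
        if pos in ("ADJ", "NOUN"):
            run.append((word, pos))
        else:
            while run and run[-1][1] != "NOUN":  # trim trailing adjectives
                run.pop()
            if len(run) > 1:
                phrases.append(" ".join(w for w, _ in run))
            run = []
    return phrases

print(candidate_phrases(tagged))
# ['automatic indexing', 'software engineering corpora']
```

A weighting step, as in the paper, would then score these phrase terms against the individual-word terms.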
  5. Balaban, M.: ¬The music structures approach to knowledge representation for music processing (1996) 0.14
    0.14400655 = product of:
      0.21600981 = sum of:
        0.04833488 = product of:
          0.14500464 = sum of:
            0.14500464 = weight(_text_:objects in 176) [ClassicSimilarity], result of:
              0.14500464 = score(doc=176,freq=2.0), product of:
                0.3086582 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.05807226 = queryNorm
                0.46979034 = fieldWeight in 176, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0625 = fieldNorm(doc=176)
          0.33333334 = coord(1/3)
        0.16767493 = weight(_text_:systematic in 176) [ClassicSimilarity], result of:
          0.16767493 = score(doc=176,freq=2.0), product of:
            0.33191046 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.05807226 = queryNorm
            0.5051812 = fieldWeight in 176, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0625 = fieldNorm(doc=176)
      0.6666667 = coord(2/3)
    
    Abstract
    Introduces a framework for the formal design and construction of music systems. It is demonstrated using the music structures approach, starting with an ontology of music objects and ending with symbolic and visual representation frameworks and their implementations. A visual formalism based on the music structures approach is introduced. A systematic development of knowledge-representation frameworks for music is essential for obtaining manageable, reliable, user-friendly music processing tools such as composition systems. It is also essential for deepening our understanding of the capabilities and limitations of a computational account of music
  6. Nielsen, H.J.: ¬The nature of fiction and its significance for classification and indexing (1997) 0.14
    0.14168307 = product of:
      0.21252461 = sum of:
        0.14671555 = weight(_text_:systematic in 1785) [ClassicSimilarity], result of:
          0.14671555 = score(doc=1785,freq=2.0), product of:
            0.33191046 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.05807226 = queryNorm
            0.44203353 = fieldWeight in 1785, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1785)
        0.065809056 = product of:
          0.13161811 = sum of:
            0.13161811 = weight(_text_:indexing in 1785) [ClassicSimilarity], result of:
              0.13161811 = score(doc=1785,freq=8.0), product of:
                0.22229293 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.05807226 = queryNorm
                0.5920931 = fieldWeight in 1785, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1785)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Considers the nature of fiction in relation to classification and indexing systems. Literary theory today is very heterogeneous, so in designing an indexing system no single trend or school should be chosen. A systematic extension and development of the 'how' facet of fictional documents is a useful approach. Themes should be a visible aspect in classification and indexing systems, and aspects of literary history, period, literary movement and influence should be noted
  7. Bhattacharyya, G.: Information: its definition for its service professionals (1997) 0.14
    0.1368534 = product of:
      0.2052801 = sum of:
        0.16767493 = weight(_text_:systematic in 277) [ClassicSimilarity], result of:
          0.16767493 = score(doc=277,freq=2.0), product of:
            0.33191046 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.05807226 = queryNorm
            0.5051812 = fieldWeight in 277, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0625 = fieldNorm(doc=277)
        0.037605174 = product of:
          0.07521035 = sum of:
            0.07521035 = weight(_text_:indexing in 277) [ClassicSimilarity], result of:
              0.07521035 = score(doc=277,freq=2.0), product of:
                0.22229293 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.05807226 = queryNorm
                0.3383389 = fieldWeight in 277, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0625 = fieldNorm(doc=277)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Analyses the elements which make up the term 'information' so that a systematic strategy for defining 'information' can be arrived at and applied by those professionals engaged in providing information about sources of information, subject classification and indexing, and abstracting. Discusses the processes of communication ('self-communication' and communication with others) and their relationship with knowledge; knowing, remembering and learning; organizations and association; the role of language in communication; information, knowledge and data; and the distinction between the medium of expression and the actual message conveyed
  8. Jones, P.A.; Bradbeer, P.V.G.: Discovery of optimal weights in a concept selection system (1996) 0.13
    0.13276461 = product of:
      0.1991469 = sum of:
        0.16767493 = weight(_text_:systematic in 6974) [ClassicSimilarity], result of:
          0.16767493 = score(doc=6974,freq=2.0), product of:
            0.33191046 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.05807226 = queryNorm
            0.5051812 = fieldWeight in 6974, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0625 = fieldNorm(doc=6974)
        0.03147196 = product of:
          0.06294392 = sum of:
            0.06294392 = weight(_text_:22 in 6974) [ClassicSimilarity], result of:
              0.06294392 = score(doc=6974,freq=2.0), product of:
                0.20335917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05807226 = queryNorm
                0.30952093 = fieldWeight in 6974, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6974)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Describes the application of weighting strategies to model uncertainties and probabilities in automatic abstracting systems, particularly in the concept selection phase. The weights were originally assigned in an ad hoc manner and were then refined by manual analysis of the results. The new method attempts to derive the weights in a more systematic way, using a genetic algorithm
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
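The abstract only sketches the idea; the paper's chromosome encoding and real fitness function (scoring the abstracts produced by a concept-selection run with candidate weights) are not reproduced here. A minimal, generic genetic-algorithm sketch for weight discovery, with an invented stand-in fitness:

```python
import random

random.seed(42)  # deterministic for the example

def fitness(weights):
    # Invented stand-in objective; the paper would instead evaluate the
    # abstracting output produced with these concept-selection weights.
    target = [0.6, 0.3, 0.1]
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def crossover(a, b):
    cut = random.randrange(1, len(a))       # one-point crossover
    return a[:cut] + b[cut:]

def mutate(w, rate=0.2, scale=0.1):
    return [x + random.gauss(0.0, scale) if random.random() < rate else x
            for x in w]

def evolve(pop_size=30, generations=50, n_weights=3):
    pop = [[random.random() for _ in range(n_weights)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]       # truncation selection (elitist)
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()  # weight vector near the stand-in target
```

Because the top half of the population is carried over unchanged, the best candidate never degrades between generations.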
  9. Moore, N.: ¬The British national information strategy (1998) 0.13
    0.13276461 = product of:
      0.1991469 = sum of:
        0.16767493 = weight(_text_:systematic in 3036) [ClassicSimilarity], result of:
          0.16767493 = score(doc=3036,freq=2.0), product of:
            0.33191046 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.05807226 = queryNorm
            0.5051812 = fieldWeight in 3036, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0625 = fieldNorm(doc=3036)
        0.03147196 = product of:
          0.06294392 = sum of:
            0.06294392 = weight(_text_:22 in 3036) [ClassicSimilarity], result of:
              0.06294392 = score(doc=3036,freq=2.0), product of:
                0.20335917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05807226 = queryNorm
                0.30952093 = fieldWeight in 3036, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3036)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The UK has not followed other countries in developing frameworks of policies to guide its transition into an information society in a consistent and systematic way. Analyzes the current UK policies using a matrix which identifies 3 levels of policy (industrial, organizational and social) and 4 cross-cutting themes (information technology, information markets, human resources, and legislation and regulation). Concludes that, together, these various initiatives add up to a national strategy, but one that lacks coordination and cohesion
    Date
    22. 2.1999 17:03:18
  10. Seiler, R.J.: Enhancing Internet access for people with disabilities (1998) 0.13
    0.13276461 = product of:
      0.1991469 = sum of:
        0.16767493 = weight(_text_:systematic in 3609) [ClassicSimilarity], result of:
          0.16767493 = score(doc=3609,freq=2.0), product of:
            0.33191046 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.05807226 = queryNorm
            0.5051812 = fieldWeight in 3609, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0625 = fieldNorm(doc=3609)
        0.03147196 = product of:
          0.06294392 = sum of:
            0.06294392 = weight(_text_:22 in 3609) [ClassicSimilarity], result of:
              0.06294392 = score(doc=3609,freq=2.0), product of:
                0.20335917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05807226 = queryNorm
                0.30952093 = fieldWeight in 3609, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3609)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The EIA project was funded by the Online Public Access Initiative, a federal initiative of the Australian Department of Communications and the Arts. It was designed to establish a systematic method of introducing the Web to clients who have a physical disability, are housebound, elderly, or cognitively impaired. It used a touchscreen, kiosk-based Web browser to assist in overcoming various physical or cognitive hurdles
    Date
    1. 8.1996 22:08:06
  11. Zackland, M.; Fontaine, D.: Systematic building of conceptual classification systems with C-KAT (1996) 0.13
    0.12600572 = product of:
      0.18900858 = sum of:
        0.042293023 = product of:
          0.12687907 = sum of:
            0.12687907 = weight(_text_:objects in 5145) [ClassicSimilarity], result of:
              0.12687907 = score(doc=5145,freq=2.0), product of:
                0.3086582 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.05807226 = queryNorm
                0.41106653 = fieldWeight in 5145, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5145)
          0.33333334 = coord(1/3)
        0.14671555 = weight(_text_:systematic in 5145) [ClassicSimilarity], result of:
          0.14671555 = score(doc=5145,freq=2.0), product of:
            0.33191046 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.05807226 = queryNorm
            0.44203353 = fieldWeight in 5145, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5145)
      0.6666667 = coord(2/3)
    
    Abstract
    C-KAT is a method and a tool which supports the design of feature-oriented classification systems for knowledge-based systems. It uses a specialized Heuristic Classification conceptual model, named 'classification by structural shift', which sees the classification process as the matching of different classifications of the same set of objects or situations, organized around different structural principles. To manage the complexity induced by the cross-product, C-KAT supports the use of a least-commitment strategy which applies in a context of constraint-directed reasoning. Presents this method using an example from the field of industrial fire insurance
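The cross-product of classifications that 'classification by structural shift' matches against each other can be sketched with invented facets; the facet names and values below are illustrative only, not C-KAT's actual feature structures:

```python
from itertools import product

# Hypothetical facets for industrial fire-insurance risks (illustrative only).
facets = {
    "construction": ["timber", "masonry", "steel"],
    "occupancy": ["warehouse", "factory"],
    "protection": ["sprinklered", "unsprinklered"],
}

# The cross-product: choosing one value per facet yields a candidate class.
classes = [dict(zip(facets, combo)) for combo in product(*facets.values())]
print(len(classes))  # 3 * 2 * 2 = 12
```

This space grows multiplicatively with each facet, which is why C-KAT's least-commitment, constraint-directed strategy prunes it rather than enumerating it as the toy above does.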
  12. Blake, D.: Indexing the medical and biological sciences (1995) 0.13
    0.12532496 = product of:
      0.3759749 = sum of:
        0.3759749 = sum of:
          0.297295 = weight(_text_:indexing in 768) [ClassicSimilarity], result of:
            0.297295 = score(doc=768,freq=20.0), product of:
              0.22229293 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.05807226 = queryNorm
              1.337402 = fieldWeight in 768, product of:
                4.472136 = tf(freq=20.0), with freq of:
                  20.0 = termFreq=20.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.078125 = fieldNorm(doc=768)
          0.078679904 = weight(_text_:22 in 768) [ClassicSimilarity], result of:
            0.078679904 = score(doc=768,freq=2.0), product of:
              0.20335917 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05807226 = queryNorm
              0.38690117 = fieldWeight in 768, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=768)
      0.33333334 = coord(1/3)
    
    COMPASS
    Science / Subject indexing
    Date
    26. 7.2002 13:22:18
    LCSH
    Medicine / Abstracting and indexing
    Biology / Abstracting and indexing
    Life sciences / Abstracting and indexing
    Series
    Occasional papers on indexing; no.3
    Subject
    Medicine / Abstracting and indexing
    Biology / Abstracting and indexing
    Life sciences / Abstracting and indexing
    Science / Subject indexing
  13. Wissen in elektronischen Netzwerken : Strukturierung, Erschließung und Retrieval von Informationsressourcen im Internet. Eine Auswahl von Vorträgen der 19. Jahrestagung der Gesellschaft für Klassifikation, Basel 1995 (1995) 0.12
    0.11974673 = product of:
      0.17962009 = sum of:
        0.14671555 = weight(_text_:systematic in 3198) [ClassicSimilarity], result of:
          0.14671555 = score(doc=3198,freq=2.0), product of:
            0.33191046 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.05807226 = queryNorm
            0.44203353 = fieldWeight in 3198, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3198)
        0.032904528 = product of:
          0.065809056 = sum of:
            0.065809056 = weight(_text_:indexing in 3198) [ClassicSimilarity], result of:
              0.065809056 = score(doc=3198,freq=2.0), product of:
                0.22229293 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.05807226 = queryNorm
                0.29604656 = fieldWeight in 3198, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3198)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
    Contains the contributions: PFEFFER, H.-J.: Gopher und Veronica; KOCH, T.: Searching the Web: systematic overview over indexes; JANKA, D.: Online-Bibliothekskataloge in Gopher und World Wide Web; PRICE, D.: Indexing the world: current developments in accessing distributed information; RUSCH-FEJA, D.D.: Structuring subject information sources in the Internet; KEMPF, A.: Forstliche Klassifikation und Meta-Information zum Wald im Internet; KOCH, T.: Improving resource discovery and retrieval on the Internet: the Nordic WAIS/World Wide Web Project and the classification of WAIS databases; ASSFOLG, R. u. R. HAMMWOEHNER: Das Konstanzer Hypertext-System (KHS) und das Worldwide Web (WWW): Mehrwert durch Integration
  14. Garcia Marco, F.J.: Contexto y determinantes funcionales de la clasificacion documental (1996) 0.12
    0.11974673 = product of:
      0.17962009 = sum of:
        0.14671555 = weight(_text_:systematic in 380) [ClassicSimilarity], result of:
          0.14671555 = score(doc=380,freq=2.0), product of:
            0.33191046 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.05807226 = queryNorm
            0.44203353 = fieldWeight in 380, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0546875 = fieldNorm(doc=380)
        0.032904528 = product of:
          0.065809056 = sum of:
            0.065809056 = weight(_text_:indexing in 380) [ClassicSimilarity], result of:
              0.065809056 = score(doc=380,freq=2.0), product of:
                0.22229293 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.05807226 = queryNorm
                0.29604656 = fieldWeight in 380, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=380)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Considers classification in the context of the information retrieval chain, a communication process. Defines classification as a heuristic methodology, which is being improved through scientific methodology. It is also an indexing process, setting each document in a systematic order, in a predictable place, and therefore able to be efficiently retrieved. Classification appears to be determined by 4 factors: the structure of the world of documents, a function of the world of knowledge; the classification tools that allow us to codify them; the way in which people create and use classifications; and the features of the information unit
  15. Rampelmann, J.: Classification tools at the EPO (1996) 0.12
    0.11974673 = product of:
      0.17962009 = sum of:
        0.14671555 = weight(_text_:systematic in 2655) [ClassicSimilarity], result of:
          0.14671555 = score(doc=2655,freq=2.0), product of:
            0.33191046 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.05807226 = queryNorm
            0.44203353 = fieldWeight in 2655, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2655)
        0.032904528 = product of:
          0.065809056 = sum of:
            0.065809056 = weight(_text_:indexing in 2655) [ClassicSimilarity], result of:
              0.065809056 = score(doc=2655,freq=2.0), product of:
                0.22229293 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.05807226 = queryNorm
                0.29604656 = fieldWeight in 2655, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2655)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Describes the European Classification System (ECLA) used by search examiners at the General Directorate 1 (DG 1) of the European Patent Office in The Hague, Netherlands, for classifying patent applications for publication, new patent documents and non-patent literature for addition to the systematic search documentation. ECLA is an internal classification tool developed on the basis of the International Patent Classification (IPC) but with further subdivisions and better adaptation to the different technical fields. In order to overcome the limitations of a classification system, secondary systems are also in use or under development. One such system is the ICO (In Computer Only) system, which offers online-only light indexing schemes for the identification of additional information contained in patent documents
  16. Ewbank, L.: Crisis in subject cataloging and retrieval (1996) 0.12
    Footnote
    Arlene G. Taylor (University of Pittsburgh), in her talk "Introduction to the Crisis," stated that there has been an erosion of confidence in subject cataloging, which is frequently thought not to be cost-effective. Signs of the crisis are 1) an administrative push to cut back or eliminate subject cataloging, 2) a lack of sufficient education in the theory and practice of subject analysis, leading to a lack of understanding on the part of non-catalogers, 3) a widespread negative view of Library of Congress Subject Headings (LCSH), and 4) a view of classification as only a way of arranging items on a shelf, and therefore clearly dispensable in an age of online information. Reasons for the erosion of confidence are 1) the availability of keyword searching, which many people think is sufficient, 2) the difficulty of subject analysis in an expanding universe of knowledge (including the increasing variety of materials and formats, not all of which are suitable for traditional subject analysis; increasing variation of word usage even within the same language; the appearance of new subjects requiring new terminology; and the use of multiple thesauri with little or no attempt to relate them to each other), and 3) the "since it can't be perfect" syndrome: since subject analysis is subjective anyway, why bother? Francis Miksa (University of Texas at Austin) spoke about "Bibliographic Control Traditions and Subject Access in Library Catalogs". Suggesting that we need a broader perspective, partly historical, and a new approach and methodology, he discussed 1) bibliographic control as a general model and the various traditions of bibliographic control, and 2) the measure of a single bibliographic item, and how much information about it belongs in an entry in a bibliographic control system. Bibliographic control is any attempt to gain power over the information-bearing objects which comprise the bibliographic universe.
The universe of knowledge is intangible and ordered, and resides in information-bearing objects, while the bibliographic universe is tangible--being made up of objects--but unordered; bibliographic control consists of identifying and ordering bibliographic objects so that they can be retrieved and used to help people reach the universe of knowledge. The types of bibliographic control that have arisen are--in chronological order--1) bibliography, 2) library cataloging, 3) indexing and abstracting, 4) documentation and information storage and retrieval, 5) archival enterprises, and 6) records management. The nature of a single bibliographic unit--that is, the basis of an entry in a bibliographic organization system--differs among these traditions of practice: in archives, it is a collection from a single source, in records management a group of records, and in library cataloging it was originally one book containing one work by one author.
    The first breakdown of this ideal was the appearance of information-bearing objects containing more than one work, such as transactions of learned societies, periodicals, etc.; the solution to this breakdown was analytical cataloging, and the result was the rise of indexing and documentation. The second breakdown, originating in indexing and abstracting, was the discovery that subject access is not limited to a work as a single bibliographic item, and that it is not simply concerned with "aboutness". The response to the second breakdown was the fragmentation of the concept of the unity of a work into the concept of the work as a conglomeration of topics, forms, and genres. Library cataloging is therefore two breakdowns behind, still operating with a simplistic view of a document as a unit. Thomas Mann (Library of Congress) spoke about "Cataloging and Reference Work". His first topic was the continuing need for subject classification of books (i.e., for subject arrangement of books on shelves). He gave two examples of information that could be found only by taking books in a particular subject area off the shelves and looking through each one for the relevant information. The information exists in these books at the page and paragraph level, and this kind of searching could not be done if the books were not organized on the shelves by subject. Scholars, students, and journalists use this type of search quite often, but librarians generally ignore it or say that it is unimportant (partly because it can't be computerized, and some librarians think anything that can't be computerized is unimportant). The quality and level of research that can be done in libraries would be greatly diminished if this kind of searching became impossible. Mann's second topic was the importance of specific entry in a controlled vocabulary.
 Use of the most specific entry is being abandoned because of the increased use of copy cataloging; general headings are being accepted in place of specific ones, and this leads to disaster. The items are effectively lost, because one never knows where to stop with general headings (since all general headings are potentially applicable), whereas with a specific heading, one stops when one finds the heading that fits most closely with the subject one wants. If works dealing with this subject all had the specific heading, one could then be sure of having found all the works in the library on this subject.
    The third topic was that the crisis is mainly due to reference and bibliographic instruction librarians, who are not telling users how to use the retrieval systems created by catalogers. They should tell users about the red books; about the importance of Narrower Terms (NT), including those that are alphabetically adjacent to Broader Terms (BT), since these cannot be found in screen displays; about the usefulness of the subject headings on records for relevant items located by author, title, or keyword for finding similar items (of course, this will not work if the headings are at the wrong level of specificity!); and about the subdivisions of subject headings. Some bibliographic instruction librarians are telling users not to use LCSH, so the users are missing many--sometimes most--of the relevant items. If the retrieval system is going to work, reference and bibliographic instruction librarians have to explain how subject headings work, rather than concealing or even disparaging them. Michael Gorman (California State University--Fresno) talked about "The Cost and Value of Organized Subject Access," saying that systematic subject access is the key to effective use of libraries, and it is therefore both cost-effective and cost-beneficial, even though many administrators don't think so. But there are problems, both inherently and in application. Good subject access maximizes both recall and relevance. Specificity is extremely important; it best meets the needs of most users, because the cataloger has already differentiated the items. It is also extremely important that a verbal subject system have a syndetic structure, so that the user can explore broader, narrower, and related subjects.
The time spent by the cataloger in creating subject headings should be inversely proportional to the time spent by the user on retrieval; the canon of service of our profession demands adding that value at the front end instead of shifting the burden to (infinite numbers of) users. Direct and indirect benefits to the user increase with the amount of time spent on subject headings; if we believe that the whole purpose of a library is to make its collection accessible, we can't afford not to provide detailed access to collections. Effective retrieval is impossible without authority control (which however is free, since it is just cataloging done right). Gorman contrasted the "howling desert" of the Internet with the well-ordered world of libraries, comparing the Internet to a used bookstore in which the bindings, indexes, and front matter have been removed from all the books and they are arranged in no order. The user searches for clumps of related material, but has no idea of its source. It may seem ordinary to go into the largest library and be able to find a specific item, secure in provenance and immediately usable, but this is beyond the wildest dreams of Net-surfers. We need fast and efficient access to recorded knowledge and information, because we have lives to live and can't spend time surfing; subject access is an essential part of this, and is vital for future seekers of truth.
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.90-97
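The syndetic structure described in the footnote above (headings linked by Broader Term, Narrower Term, and Related Term references) can be sketched as a small term graph. This is a minimal sketch assuming a toy vocabulary; the headings and the `narrower` helper are invented for illustration, not taken from LCSH:

```python
# A miniature syndetic vocabulary: each heading lists its broader (BT),
# narrower (NT), and related (RT) headings. All terms are invented examples.
THESAURUS = {
    "Cataloging": {"BT": [], "NT": ["Subject cataloging", "Descriptive cataloging"], "RT": ["Indexing"]},
    "Subject cataloging": {"BT": ["Cataloging"], "NT": ["Subject headings"], "RT": ["Classification"]},
    "Descriptive cataloging": {"BT": ["Cataloging"], "NT": [], "RT": []},
    "Subject headings": {"BT": ["Subject cataloging"], "NT": [], "RT": []},
}

def narrower(term, thesaurus=THESAURUS):
    """Return every narrower term reachable from `term`, depth-first."""
    found = []
    for nt in thesaurus.get(term, {}).get("NT", []):
        found.append(nt)
        found.extend(narrower(nt, thesaurus))
    return found

# A user who starts at a broad heading can be led by the NT references
# down to every more specific heading beneath it:
print(narrower("Cataloging"))
# ['Subject cataloging', 'Subject headings', 'Descriptive cataloging']
```

The point of the syndetic references is exactly this traversal: a searcher who enters at the wrong level of specificity can still be routed to the heading that fits most closely, which a flat keyword index cannot do.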
  17. Will, L.: ¬The indexing of museum objects (1993) 0.11
  18. Wendl, T.: Why they don't see what they could see : on the perception of photographs from a cross-cultural-perspective (1996) 0.11
    Abstract
    The paper examines studies on the difficulties in understanding photographs from a cross-cultural perspective. These difficulties range from the complete failure to recognize a photograph as a representation of objects to the inability to recognize the spatial layout of the object represented. 2 kinds of explanations are offered: observer-specific and medium-specific. It is argued that photography produces pictures in a fairly rough 'precultural' state that often sharply contradicts indigenous experiences of seeing and depicting. Adopting the theoretical framework of conventionalism developed by Marx Wartofsky (1980) and others, the paper emphasizes and illustrates the cultural embedding of 'visual practice'. Following a discussion of other recent related empirical findings, the need is stressed for more detailed and systematic research on the interrelations between culture-specific pictorial traditions and corresponding viewing habits
  19. Bell, H.K.: Indexing biographies, and other stories of human lives (1992) 0.11
    COMPASS
    Subject indexing
    Footnote
    Rez. in: Knowledge organization 22(1995) no.1, S.46-47 (R. Fugmann)
    Series
    Occasional papers on indexing; no.1
    Subject
    Subject indexing
  20. Boynton, J.: Identifying systematic reviews in MEDLINE : developing an objective approach to search strategy design (1998) 0.10
    Abstract
    Systematic reviews are becoming increasingly important for health care professionals seeking to provide evidence based health care. In the past, systematic reviews have been difficult to identify among the mass of literature labelled 'reviews'. Reports results of a study to design search strategies based on a more objective approach to strategy construction. MEDLINE was chosen as the database, and word frequencies in the titles, abstracts and subject keywords of a collection of systematic reviews of effective health care interventions were analyzed to derive a highly sensitive search strategy. 'Sensitivity' was used in preference to the usual term 'recall' as one of the measures (in addition to the usual 'precision'). The proposed strategy was found to offer 98% sensitivity in retrieving systematic reviews, while retaining a low but acceptable level of precision (20%). Reports results using other strategies with other levels of sensitivity and precision. Concludes that it is possible to use frequency analysis to generate highly sensitive strategies when retrieving systematic reviews
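    The sensitivity and precision figures reported in the abstract follow from the standard contingency-table definitions: sensitivity (recall) is the fraction of all systematic reviews in the test set that the strategy retrieves, and precision is the fraction of retrieved records that actually are systematic reviews. A minimal sketch with hypothetical counts chosen to reproduce the reported 98%/20% levels (the counts themselves are invented, not taken from the study):

```python
def sensitivity(tp, fn):
    """Sensitivity (recall): retrieved reviews / all reviews in the set."""
    return tp / (tp + fn)

def precision(tp, fp):
    """Precision: retrieved reviews / all records retrieved."""
    return tp / (tp + fp)

# Hypothetical counts: of 100 systematic reviews in the test collection,
# the strategy retrieves 98 (missing 2) along with 392 non-review records.
tp, fn, fp = 98, 2, 392
print(f"sensitivity = {sensitivity(tp, fn):.0%}")  # 98%
print(f"precision   = {precision(tp, fp):.0%}")    # 20%
```

The trade-off the study describes is visible here: broadening the strategy adds true positives (raising sensitivity) but usually adds false positives faster, dragging precision down.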
