Search (337 results, page 17 of 17)

  1. Intner, S.S.; Lazinger, S.S.; Weihs, J.: Metadata and its impact on libraries (2005) 0.02
    0.019380992 = product of:
      0.051682647 = sum of:
        0.031779464 = weight(_text_:storage in 339) [ClassicSimilarity], result of:
          0.031779464 = score(doc=339,freq=4.0), product of:
            0.1866346 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.034252144 = queryNorm
            0.17027639 = fieldWeight in 339, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.015625 = fieldNorm(doc=339)
        0.009794091 = weight(_text_:retrieval in 339) [ClassicSimilarity], result of:
          0.009794091 = score(doc=339,freq=4.0), product of:
            0.10360982 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.034252144 = queryNorm
            0.09452859 = fieldWeight in 339, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.015625 = fieldNorm(doc=339)
        0.010109092 = weight(_text_:systems in 339) [ClassicSimilarity], result of:
          0.010109092 = score(doc=339,freq=4.0), product of:
            0.10526281 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.034252144 = queryNorm
            0.096036695 = fieldWeight in 339, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.015625 = fieldNorm(doc=339)
      0.375 = coord(3/8)
    
    LCSH
    Information storage and retrieval systems
    Subject
    Information storage and retrieval systems
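    The explain tree above follows Lucene's ClassicSimilarity: each matching term contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = tf x idf x fieldNorm with tf = sqrt(freq), and the sum over terms is scaled by the coordination factor (here coord(3/8): three of eight query terms matched). A minimal sketch reproducing that arithmetic for result 1 (the helper name is ours, not a Lucene API):

```python
import math

def classic_similarity(terms, coord):
    """Recompute a Lucene ClassicSimilarity explain tree:
    score = coord * sum_t(queryWeight_t * fieldWeight_t), where
    queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm,
    with tf = sqrt(term frequency in the field)."""
    total = 0.0
    for freq, idf, query_norm, field_norm in terms:
        tf = math.sqrt(freq)
        query_weight = idf * query_norm       # e.g. 5.4488444 * 0.034252144 = 0.1866346
        field_weight = tf * idf * field_norm  # e.g. 2.0 * 5.4488444 * 0.015625 = 0.17027639
        total += query_weight * field_weight
    return coord * total

# The three matching terms of result 1 (doc 339): storage, retrieval, systems.
terms_339 = [
    (4.0, 5.4488444, 0.034252144, 0.015625),  # storage   -> 0.031779464
    (4.0, 3.024915,  0.034252144, 0.015625),  # retrieval -> 0.009794091
    (4.0, 3.0731742, 0.034252144, 0.015625),  # systems   -> 0.010109092
]
print(classic_similarity(terms_339, coord=3 / 8))  # ~0.019380992, the 0.02 shown above
```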
  2. Zelger, J.: A dialogic networking approach to information retrieval (1994) 0.02
    0.01837309 = product of:
      0.07349236 = sum of:
        0.05617869 = weight(_text_:storage in 6917) [ClassicSimilarity], result of:
          0.05617869 = score(doc=6917,freq=2.0), product of:
            0.1866346 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.034252144 = queryNorm
            0.30100897 = fieldWeight in 6917, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6917)
        0.01731367 = weight(_text_:retrieval in 6917) [ClassicSimilarity], result of:
          0.01731367 = score(doc=6917,freq=2.0), product of:
            0.10360982 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.034252144 = queryNorm
            0.16710453 = fieldWeight in 6917, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6917)
      0.25 = coord(2/8)
    
    Abstract
    The user of documents, e.g. of a library, faces the task of exploring the available data and of selecting information according to his actual intentions. He may even happen upon new aspects and, following these, further develop his original quest. How can the user be supported by PC-based procedures? Recently, at the Institute of Philosophy, Univ. of Innsbruck, the GABEK method (Ganzheitliche Bewältigung sprachlich erfaßter Komplexität) was developed. It has so far proved useful in similar cases of ordering and/or retrieving information, especially in building hidden order structures and incorporating them into information processing and storage facilities. It seems that the GABEK method might also be applied successfully to the user problems mentioned above. To clarify his quest, the user relies on database1. This base contains experiences of previous users, expressed in natural language sentences. Through a PC-supported dialogue1, founded on database1, the user elaborates a more detailed concept of his own topic. This concept is later termed a 'linguistic gestalt' if it fulfils certain conditions. The linguistic gestalt may include 3 to 10 sentences in natural language, which specify the user's original intentions. The key terms contained in this linguistic gestalt will, in a dialogue2, be employed to retrieve relevant information from database2. Database2 represents the information system, e.g. a library. The procedures indicated above and the building of linguistic gestalts can be effected by GABEK. Provided only small quantities of data are involved, the WINRELAN program (1993) may be used.
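    The two-stage dialogue described above, refining a 'linguistic gestalt' against database1 and then querying database2 with the gestalt's key terms, can be pictured as a small pipeline. The sketch below is an illustrative reconstruction under our own assumptions (substring matching, a toy stopword list), not the GABEK or WINRELAN implementation:

```python
def build_linguistic_gestalt(topic, database1, max_sents=10):
    """Dialogue 1 (illustrative): collect previous users' natural-language
    sentences related to the topic; the selection is the 'gestalt'
    (3 to 10 sentences in Zelger's description)."""
    related = [s for s in database1 if topic.lower() in s.lower()]
    return related[:max_sents]

def key_terms(gestalt, stopwords=frozenset({"the", "of", "a", "and", "in", "for"})):
    """Extract candidate key terms from the gestalt sentences (toy heuristic)."""
    return {w.strip(".,").lower() for s in gestalt for w in s.split()} - stopwords

def dialogue2(terms, database2):
    """Dialogue 2 (illustrative): retrieve documents from the information
    system (database2) matching any key term of the gestalt."""
    return [doc for doc in database2 if any(t in doc.lower() for t in terms)]

DB1 = ["Storage media for image archives age quickly.",
       "Retrieval of images needs content description."]
DB2 = ["Long-term storage of digitised images", "Library opening hours"]
print(dialogue2(key_terms(build_linguistic_gestalt("images", DB1)), DB2))
```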
  3. Brown, C.: The changing face of scientific discourse : analysis of genomic and proteomic database usage and acceptance (2003) 0.02
    0.01837309 = product of:
      0.07349236 = sum of:
        0.05617869 = weight(_text_:storage in 1752) [ClassicSimilarity], result of:
          0.05617869 = score(doc=1752,freq=2.0), product of:
            0.1866346 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.034252144 = queryNorm
            0.30100897 = fieldWeight in 1752, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1752)
        0.01731367 = weight(_text_:retrieval in 1752) [ClassicSimilarity], result of:
          0.01731367 = score(doc=1752,freq=2.0), product of:
            0.10360982 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.034252144 = queryNorm
            0.16710453 = fieldWeight in 1752, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1752)
      0.25 = coord(2/8)
    
    Abstract
    The explosion of the field of molecular biology is paralleled by the growth in usage and acceptance of Web-based genomic and proteomic databases (GPD) such as GenBank and Protein Data Bank in the scholarly communication of scientists. Surveys, case studies, analysis of bibliographic records from Medline and CAplus, and examination of "Instructions to Authors" sections of molecular biology journals all confirm the integral role of GPD in the scientific literature cycle. Over the past 20 years the place of GPD in the culture of molecular biologists was observed to move from tacit implication to explicit knowledge. Originally journals suggested deposition of data in GPD, but by the late 1980s the majority of journals mandated deposition of data for a manuscript to be accepted for publication. A surge subsequently occurred in the number of articles retrievable from Medline and CAplus using the keyword "GenBank." GPD were found to be not a new form of publication, but rather a fundamental storage and retrieval mechanism for vast amounts of molecular biology information that supports the creation of scientific intellectual property. For science to continue to advance, scientists unequivocally agreed that GPD must remain free of peer review and available at no charge to the public. The results suggest that the existing models of scientific communication should be updated to incorporate GPD data deposition into the current continuum of scientific communication.
  4. Michon, J.: Biomedicine and the Semantic Web : a knowledge model for visual phenotype (2006) 0.02
    0.01837309 = product of:
      0.07349236 = sum of:
        0.05617869 = weight(_text_:storage in 246) [ClassicSimilarity], result of:
          0.05617869 = score(doc=246,freq=2.0), product of:
            0.1866346 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.034252144 = queryNorm
            0.30100897 = fieldWeight in 246, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.0390625 = fieldNorm(doc=246)
        0.01731367 = weight(_text_:retrieval in 246) [ClassicSimilarity], result of:
          0.01731367 = score(doc=246,freq=2.0), product of:
            0.10360982 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.034252144 = queryNorm
            0.16710453 = fieldWeight in 246, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=246)
      0.25 = coord(2/8)
    
    Abstract
    Semantic Web tools provide new and significant opportunities for organizing and improving the utility of biomedical information. As librarians become more involved with biomedical information, it is important for them, particularly catalogers, to be part of research teams that are employing these techniques and developing a high level interoperable biomedical infrastructure. To illustrate these principles, we used Semantic Web tools to create a knowledge model for human visual phenotypes (observable characteristics). This is an important foundation for generating associations between genomics and clinical medicine. In turn this can allow customized medical therapies and provide insights into the molecular basis of disease. The knowledge model incorporates a wide variety of clinical and genomic data including examination findings, demographics, laboratory tests, imaging and variations in DNA sequence. Information organization, storage and retrieval are facilitated through the use of metadata and the ability to make computable statements in the visual science domain. This paper presents our work, discusses the value of Semantic Web technologies in biomedicine, and identifies several important roles that library and information scientists can play in developing a more powerful biomedical information infrastructure.
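    The "computable statements" the abstract mentions are naturally expressed as RDF triples. A minimal sketch with rdflib follows; the vp: namespace and all property names are invented for illustration and are not the paper's actual knowledge model:

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical namespace; the paper's actual vocabulary is not reproduced here.
VP = Namespace("http://example.org/visual-phenotype#")

g = Graph()
g.bind("vp", VP)

# A computable statement: a patient exhibits a visual phenotype,
# linked to an examination finding and a DNA sequence variation.
g.add((VP.patient42, RDF.type, VP.Patient))
g.add((VP.patient42, VP.hasPhenotype, VP.reducedNightVision))
g.add((VP.reducedNightVision, VP.observedBy, Literal("electroretinogram")))
g.add((VP.patient42, VP.hasVariant, Literal("RHO c.68C>A")))

print(g.serialize(format="turtle"))  # triples other tools can query and reason over
```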
  5. Roknuzzaman, M.; Kanai, H.; Umemoto, K.: Integration of knowledge management process into digital library system : a theoretical perspective (2009) 0.02
    0.01837309 = product of:
      0.07349236 = sum of:
        0.05617869 = weight(_text_:storage in 2971) [ClassicSimilarity], result of:
          0.05617869 = score(doc=2971,freq=2.0), product of:
            0.1866346 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.034252144 = queryNorm
            0.30100897 = fieldWeight in 2971, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2971)
        0.01731367 = weight(_text_:retrieval in 2971) [ClassicSimilarity], result of:
          0.01731367 = score(doc=2971,freq=2.0), product of:
            0.10360982 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.034252144 = queryNorm
            0.16710453 = fieldWeight in 2971, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2971)
      0.25 = coord(2/8)
    
    Abstract
    Purpose - The purpose of this paper is to develop a theoretical framework for an integrated digital library (DL) system based on a knowledge management (KM) process. Design/methodology/approach - The study is based on viewpoints, a review of existing concepts and frameworks of DL and KM, and the results of interviews with nine DL practitioners world-wide. The respondents were purposively selected from the participants' lists of two international conferences held in 2008. The interviews were conducted through e-mail using a short, structured and open-ended questionnaire. Findings - The study finds some significant overlaps between DL and KM and argues that a generic KM process of acquisition, organization, storage and retrieval, and dissemination of knowledge, with feedback loops, can suitably be fitted into a DL. Thus an integrated DL system can consist of digital resources, technological infrastructure, experience and expertise, DL services and a KM process. The integration of KM can add value by developing a knowledge-based culture, management of intellectual assets, promotion of knowledge sharing, innovations in DL services and a strong leadership position for the DL. Research limitations/implications - The research presents theoretical viewpoints of DL and KM, and the model therefore demands practical investigation. Practical implications - The study suggests the adoption of a KM process in DL systems to enhance their effectiveness. Originality/value - The proposed model is an original work and would theoretically contribute to the advancement of academic debate in both the areas of DL and KM.
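    The generic KM process the paper fits into a DL, acquisition, organization, storage and retrieval, and dissemination with feedback, can be pictured as a loop over a document store. The sketch below is only an illustration of that cycle under our own simplifications, not the authors' framework:

```python
from dataclasses import dataclass, field

@dataclass
class DigitalLibraryKM:
    """Illustrative loop over the generic KM process mapped onto a DL:
    acquire -> organize -> store/retrieve -> disseminate, with feedback."""
    store: dict = field(default_factory=dict)
    feedback_log: list = field(default_factory=list)

    def acquire(self, item_id, content):   # knowledge acquisition
        return item_id, content

    def organize(self, item_id, content):  # organization: assign toy metadata
        return {"id": item_id, "content": content,
                "keywords": content.lower().split()}

    def store_item(self, record):          # storage
        self.store[record["id"]] = record

    def retrieve(self, keyword):           # retrieval / dissemination
        return [r for r in self.store.values() if keyword in r["keywords"]]

    def feedback(self, comment):           # feedback into the next round
        self.feedback_log.append(comment)

dl = DigitalLibraryKM()
dl.store_item(dl.organize(*dl.acquire("d1", "Digital preservation handbook")))
print(dl.retrieve("preservation"))
dl.feedback("add material on emulation")
```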
  6. Amirhosseini, M.: Quantitative evaluation of the movement from complexity toward simplicity in the structure of thesaurus descriptors (2015) 0.02
    0.01837309 = product of:
      0.07349236 = sum of:
        0.05617869 = weight(_text_:storage in 3695) [ClassicSimilarity], result of:
          0.05617869 = score(doc=3695,freq=2.0), product of:
            0.1866346 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.034252144 = queryNorm
            0.30100897 = fieldWeight in 3695, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3695)
        0.01731367 = weight(_text_:retrieval in 3695) [ClassicSimilarity], result of:
          0.01731367 = score(doc=3695,freq=2.0), product of:
            0.10360982 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.034252144 = queryNorm
            0.16710453 = fieldWeight in 3695, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3695)
      0.25 = coord(2/8)
    
    Abstract
    The concepts of simplicity and complexity play major roles in information storage and retrieval in knowledge organizations. This paper reports an investigation of these concepts in the structure of descriptors. The main purpose of simplicity is to decrease the number of words in the construction of descriptors, as this affects semantic relations, recall and precision. ISO 25964 has affirmed the purpose of simplicity by requiring that compound terms be split into simpler concepts. This work aims to elaborate the standard methods of evaluation by providing a more detailed evaluation of descriptor structure and identifying the factors behind simplicity and complexity in the structure of thesaurus descriptors. The research population is taken from the descriptors of the Commonwealth Agricultural Bureaux (CAB) Thesaurus, the Persian Cultural Thesaurus (ASFA) and the Chemical Thesaurus. The research was conducted using statistical and content analysis methods. We propose a new quantitative approach as well as novel indicators and indices, involving Simplicity and Factoring Ratios, to evaluate descriptor structure. The results will be useful for verification, selection and maintenance purposes in knowledge organizations, and the inquiry method can be further developed in the field of ontology evaluation.
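    The paper's Simplicity and Factoring Ratios are not defined in the abstract. The sketch below assumes one plausible reading, the share of single-word descriptors in a vocabulary, purely to illustrate how such an index could be computed:

```python
def simplicity_ratio(descriptors):
    """Assumed reading: the share of descriptors that are single words;
    1.0 would mean every compound term has been factored into simple concepts."""
    simple = [d for d in descriptors if len(d.split()) == 1]
    return len(simple) / len(descriptors)

sample = ["soil", "soil erosion", "water", "water quality monitoring"]
print(simplicity_ratio(sample))  # 0.5 under this assumed definition
```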
  7. Blake, J.: Some issues in the classification of zoology (2011) 0.02
    0.01837309 = product of:
      0.07349236 = sum of:
        0.05617869 = weight(_text_:storage in 4845) [ClassicSimilarity], result of:
          0.05617869 = score(doc=4845,freq=2.0), product of:
            0.1866346 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.034252144 = queryNorm
            0.30100897 = fieldWeight in 4845, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4845)
        0.01731367 = weight(_text_:retrieval in 4845) [ClassicSimilarity], result of:
          0.01731367 = score(doc=4845,freq=2.0), product of:
            0.10360982 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.034252144 = queryNorm
            0.16710453 = fieldWeight in 4845, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4845)
      0.25 = coord(2/8)
    
    Abstract
    This paper identifies and discusses features of the classification of mammals that are relevant to the bibliographic classification of the subject. The tendency of zoological classifications to change, the differing sizes of groups of species, the use zoologists make of groupings other than taxa, and the links in zoology between classification and nomenclature are identified as key themes the bibliographic classificationist needs to be aware of. The impact of cladistics, a novel classificatory method and philosophy adopted by zoologists in the last few decades, is identified as the defining feature of the current, rather turbulent, state of zoological classification. However, because zoologists still employ some non-cladistic classifications, because cladistic classifications are in some ways unsuited to optimal information storage and retrieval, and because some of their consequences for zoological classification are as yet unknown, bibliographic classifications cannot be modelled entirely on them.
  8. Järvelin, K.; Vakkari, P.: LIS research across 50 years: content analysis of journal articles : offering an information-centric conception of memes (2022) 0.02
    0.01837309 = product of:
      0.07349236 = sum of:
        0.05617869 = weight(_text_:storage in 949) [ClassicSimilarity], result of:
          0.05617869 = score(doc=949,freq=2.0), product of:
            0.1866346 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.034252144 = queryNorm
            0.30100897 = fieldWeight in 949, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.0390625 = fieldNorm(doc=949)
        0.01731367 = weight(_text_:retrieval in 949) [ClassicSimilarity], result of:
          0.01731367 = score(doc=949,freq=2.0), product of:
            0.10360982 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.034252144 = queryNorm
            0.16710453 = fieldWeight in 949, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=949)
      0.25 = coord(2/8)
    
    Abstract
    Purpose - This paper analyses research in Library and Information Science (LIS) and reports on (1) the status of LIS research in 2015 and (2) the evolution of LIS research longitudinally from 1965 to 2015. Design/methodology/approach - The study employs a quantitative intellectual content analysis of articles published in 30+ scholarly LIS journals, following the design by Tuomaala et al. (2014). In the content analysis, we classify articles along eight dimensions covering topical content and methodology. Findings - The topical findings indicate that the earlier strong LIS emphasis on L&I services has declined notably, while scientific and professional communication has become the most popular topic. Information storage and retrieval has given up its earlier strong position towards the end of the years analyzed. Individuals are increasingly the units of observation. End-users' and developers' viewpoints have strengthened at the cost of the intermediaries' viewpoint. LIS research is methodologically increasingly scattered, since surveys, scientometric methods, experiments, case studies and qualitative studies have all gained in popularity. Consequently, LIS may have become more versatile in the analysis of its research objects during the years analyzed. Originality/value - Among quantitative intellectual content analyses of LIS research, the study is unique in its scope: length of analysis period (50 years), width (8 dimensions covering topical content and methodology) and depth (the annual batch of 30+ scholarly journals).
  9. Cole, C.: The consciousness' drive : information need and the search for meaning (2018) 0.02
    0.016415523 = product of:
      0.06566209 = sum of:
        0.0476692 = weight(_text_:storage in 480) [ClassicSimilarity], result of:
          0.0476692 = score(doc=480,freq=4.0), product of:
            0.1866346 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.034252144 = queryNorm
            0.25541458 = fieldWeight in 480, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.0234375 = fieldNorm(doc=480)
        0.017992895 = weight(_text_:retrieval in 480) [ClassicSimilarity], result of:
          0.017992895 = score(doc=480,freq=6.0), product of:
            0.10360982 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.034252144 = queryNorm
            0.17366013 = fieldWeight in 480, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0234375 = fieldNorm(doc=480)
      0.25 = coord(2/8)
    
    Footnote
    Cole's reliance upon Donald's Theory of Mind is limiting; it represents a major weakness of the book. Donald's Theory of Mind has been an influential model in evolutionary psychology, appearing in his 1991 book Origins of the Modern Mind: Three Stages in the Evolution of Culture and Cognition (Harvard University Press). Donald's approach is a top-down, conceptual model that explicates what makes the human mind different from and exceptional among other animal intelligences. However, there are alternative, useful, science-based models of animal and human cognition that begin with a bottom-up approach to understanding the building blocks of cognition shared by humans and other "intelligent" animals. For example, in "A Bottom-Up Approach to the Primate Mind," Frans B.M. de Waal and Pier Francesco Ferrari note that neurophysiological studies show that specific neuron assemblies in the rat hippocampus are active during memory retrieval and that those same assemblies predict future choices. This would suggest that episodic memory and future orientation are not as advanced processes as Donald posits in his Theory of Mind. Also, neuroimaging studies in humans show that the cortical areas active during observation of another's actions are related in position and structure to those areas identified as containing mirror neurons in macaques. Could this point to a physiological basis for imitation? ... (Scott Curtis)"
    LCSH
    Information Storage and Retrieval
    Subject
    Information Storage and Retrieval
  10. Stratton, B.: The transiency of CD-ROM? : A reappraisal for the 1990s (1994) 0.02
    0.016132783 = product of:
      0.06453113 = sum of:
        0.044942953 = weight(_text_:storage in 8580) [ClassicSimilarity], result of:
          0.044942953 = score(doc=8580,freq=2.0), product of:
            0.1866346 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.034252144 = queryNorm
            0.24080718 = fieldWeight in 8580, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.03125 = fieldNorm(doc=8580)
        0.019588182 = weight(_text_:retrieval in 8580) [ClassicSimilarity], result of:
          0.019588182 = score(doc=8580,freq=4.0), product of:
            0.10360982 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.034252144 = queryNorm
            0.18905719 = fieldWeight in 8580, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=8580)
      0.25 = coord(2/8)
    
    Abstract
    In an earlier article, Tony McSean and Derek Law questioned the merits of CD-ROM as a technology and suggested that CD-ROM is likely to be supplanted by future technologies (Library Association Record 92(1990) no.11, S.837-838,841). Reexamines the hypothesis in light of subsequent developments. Compares CD-ROM with online information retrieval, noting in the case of CD-ROM databases: lack of telecommunication costs; need for better training; poorer currency; relative search costs. The poorer currency of CD-ROM may be solved by providing complementary online and CD-ROM services. Discusses the provision of parallel printed and CD-ROM versions (and, in the case of some databases such as LISA, online versions as well), networking issues, and the impact of CD-ROM on printed products, possibly leading to the extinction of printed services with catastrophic effects on the printing industry and for people without the means of accessing CD-ROM databases. McSean and Law failed to predict the revolution in personal computers and to appreciate the effect of networking capabilities. The perceived problems of CD-ROM disc capacity and low retrieval speeds still remain for very large databases. Considers current applications of CD-ROM for electronic libraries and for document and database delivery, particularly over networks such as JANET. Concludes that McSean and Law's assertion that CD-ROM is a transient technology is quite correct in that all information technology has proved to be transient. However, CD-ROM has proved a useful and attractive alternative to online databases. CD-ROM may bring about its own demise as a storage medium simply because it is so efficacious as a delivery medium: once transferred to magnetic media, the databases are more efficiently and speedily accessible and more easily networked in local area networks (LANs) or wide area networks (WANs), or nationally, as in the BIDS service over JANET.
  11. Aldana, J.F.; Gómez, A.C.; Moreno, N.; Nebro, A.J.; Roldán, M.M.: Metadata functionality for semantic Web integration (2003) 0.02
    0.016132783 = product of:
      0.06453113 = sum of:
        0.044942953 = weight(_text_:storage in 2731) [ClassicSimilarity], result of:
          0.044942953 = score(doc=2731,freq=2.0), product of:
            0.1866346 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.034252144 = queryNorm
            0.24080718 = fieldWeight in 2731, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.03125 = fieldNorm(doc=2731)
        0.019588182 = weight(_text_:retrieval in 2731) [ClassicSimilarity], result of:
          0.019588182 = score(doc=2731,freq=4.0), product of:
            0.10360982 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.034252144 = queryNorm
            0.18905719 = fieldWeight in 2731, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=2731)
      0.25 = coord(2/8)
    
    Abstract
    We propose an extension of a mediator architecture. This extension is oriented to ontology-driven data integration. In our architecture, ontologies are not managed by an external component or service, but are integrated in the mediation layer. This approach implies rethinking the mediator design, but at the same time provides advantages from a database perspective. Some of these advantages include the application of optimization and evaluation techniques that use and combine information from all abstraction levels (physical schema, logical schema and semantic information defined by ontology). Although the Web is probably the richest information repository in human history, users cannot specify what they want from it. Two major problems that arise in current search engines (Heflin, 2001) are: a) polysemy, when the same word is used with different meanings; b) synonymy, when two different words have the same meaning. Polysemy causes irrelevant information retrieval. Synonymy, on the other hand, causes useful documents to be missed. The lack of a capability to understand the context of words and the relationships among required terms explains many of the missed and false results produced by search engines. The Semantic Web will bring structure to the meaningful content of Web pages, giving semantic relationships among terms and possibly avoiding these problems. Various proposals have appeared for metadata representation and communication standards, and other services and tools that may eventually merge into the global Semantic Web (Berners-Lee, 2001). Hopefully, in the next few years we will see the universal adoption of open standards for representation and sharing of meta-information. In this environment, software agents roaming from page to page can readily carry out sophisticated tasks for users (Berners-Lee, 2001). In this context, ontologies can be seen as metadata that represent the semantics of data, providing a standard vocabulary for a knowledge domain, as DTDs and XML Schemas do. If its pages were so structured, the Web could be seen as a heterogeneous collection of autonomous databases. This suggests that techniques developed in the Database area could be useful. Database research mainly deals with efficient storage and retrieval and with powerful query languages.
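    The synonymy problem described above, different words for the same meaning causing useful documents to be missed, is commonly countered by ontology-driven query expansion. The sketch below is a generic illustration of that idea with a toy ontology, not the proposed mediator architecture:

```python
# Toy ontology: each concept lists its synonymous surface terms.
ONTOLOGY = {
    "car": {"car", "automobile", "motorcar"},
    "bank_institution": {"bank", "credit institution"},
}

def expand_query(term):
    """Replace a query term by every synonym of each concept it may denote;
    a term listed under several concepts is the polysemy case."""
    expanded = set()
    for synonyms in ONTOLOGY.values():
        if term in synonyms:
            expanded |= synonyms
    return expanded or {term}

print(sorted(expand_query("automobile")))  # ['automobile', 'car', 'motorcar']
```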
  12. Rademaker, C.A.: The classification of ornamental designs in the United States Patent Classification System (2000) 0.01
    0.014698472 = product of:
      0.058793887 = sum of:
        0.044942953 = weight(_text_:storage in 130) [ClassicSimilarity], result of:
          0.044942953 = score(doc=130,freq=2.0), product of:
            0.1866346 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.034252144 = queryNorm
            0.24080718 = fieldWeight in 130, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.03125 = fieldNorm(doc=130)
        0.013850937 = weight(_text_:retrieval in 130) [ClassicSimilarity], result of:
          0.013850937 = score(doc=130,freq=2.0), product of:
            0.10360982 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.034252144 = queryNorm
            0.13368362 = fieldWeight in 130, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=130)
      0.25 = coord(2/8)
    
    Abstract
    Industrial design is the professional discipline that creates pleasing, aesthetic shapes and appearances for mass-produced articles and consumer goods. This profession has grown in importance in recent decades as the science of introducing innovative ornamental designs for mass-produced articles has developed into a highly successful marketing tool. Businesses have recognized that profits can be enhanced by the introduction of improved ornamental designs. Consequently, in the modern marketplace an enormous variety of innovative ornamental designs is available to consumers for practically all articles of mass manufacture, including vehicles, furniture, packaging, communication devices, luggage and apparel. As the availability of consumer goods becomes commonplace throughout the world, the profession of industrial design is becoming increasingly active and important. The significance of innovative ornamental designs to the field of business commerce has long been recognized by intellectual property offices throughout the world. Most governments have established statutory means that permit designers to protect new ornamental designs. However, as large numbers of ornamental designs are granted statutory protection, intellectual property offices are faced with the challenge of organizing the images of these designs in a format that permits efficient access and dissemination. The classification of ornamental images is a difficult problem: as the number of new designs produced each year grows, efficient access to protected designs has become an increasingly complicated task. The United States Patent and Trademark Office (USPTO) has provided statutory protection for ornamental designs since 1844. To date, more than 400,000 industrial designs have been granted patented statutory protection, known as a design patent. In order to permit efficient storage and retrieval of the collection of design patents, the USPTO has developed a classification system that is directed to the ornamental appearance of industrial designs.
  13. Lim, S.C.J.; Liu, Y.; Lee, W.B.: A methodology for building a semantically annotated multi-faceted ontology for product family modelling (2011) 0.01
    0.014698472 = product of:
      0.058793887 = sum of:
        0.044942953 = weight(_text_:storage in 1485) [ClassicSimilarity], result of:
          0.044942953 = score(doc=1485,freq=2.0), product of:
            0.1866346 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.034252144 = queryNorm
            0.24080718 = fieldWeight in 1485, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.03125 = fieldNorm(doc=1485)
        0.013850937 = weight(_text_:retrieval in 1485) [ClassicSimilarity], result of:
          0.013850937 = score(doc=1485,freq=2.0), product of:
            0.10360982 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.034252144 = queryNorm
            0.13368362 = fieldWeight in 1485, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=1485)
      0.25 = coord(2/8)
    
    Abstract
    Product family design is one of the prevailing approaches to realizing mass customization. With the increasing number of product offerings targeted at different market segments, the issue of information management in product family design, which concerns the efficient and effective storage, sharing and timely retrieval of design information, has become more complicated and challenging. Product family modelling schemas reported in the literature generally stress the component aspects of a product family and its analysis, with a limited capability to model complex inter-relations between physical components and other required information in different semantic orientations, such as manufacturing, materials and marketing. To tackle this problem, ontology-based representation has been identified as a promising solution for redesigning product platforms, especially in a semantically rich environment. However, ontology development in design engineering demands a great deal of time and human effort to process complex information. When a large variety of products is available, particularly in the consumer market, a more efficient method for building a product family ontology that incorporates multi-faceted semantic information is therefore highly desirable. In this study, we propose a methodology for building a semantically annotated multi-faceted ontology for product family modelling that is able to automatically suggest semantically related annotations based on the design and manufacturing repository. The six steps of building such an ontology are discussed in detail: formation of the product family taxonomy; extraction of entities; faceted unit generation and concept identification; facet modelling and semantic annotation; formation of a semantically annotated multi-faceted product family ontology (MFPFO); and ontology validation and evaluation. Using a family of laptop computers as an illustrative example, we demonstrate how our methodology can be deployed step by step to create a semantically annotated MFPFO. Finally, we briefly discuss future research issues as well as interesting applications that can be further pursued based on the MFPFO developed.
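    A semantically annotated, multi-faceted family member might be recorded as in the toy structure below, keyed by the semantic orientations the abstract names (component, manufacturing, material, marketing). The structure and all field names are assumed for illustration, not taken from the paper:

```python
# Illustrative multi-faceted annotation for one member of a laptop family.
FAMILY = {
    "laptop_x1": {
        "component":     {"cpu": "low-voltage", "display": "14-inch"},
        "manufacturing": {"assembly_line": "A3"},
        "material":      {"chassis": "magnesium alloy"},
        "marketing":     {"segment": "business travel"},
    },
}

def members_with(facet, key, value, family=FAMILY):
    """Retrieve family members matching one faceted annotation."""
    return [name for name, rec in family.items()
            if rec.get(facet, {}).get(key) == value]

print(members_with("marketing", "segment", "business travel"))  # ['laptop_x1']
```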
  14. RDA: Resource Description and Access Print (2014) 0.01
    0.014698472 = product of:
      0.058793887 = sum of:
        0.044942953 = weight(_text_:storage in 2049) [ClassicSimilarity], result of:
          0.044942953 = score(doc=2049,freq=2.0), product of:
            0.1866346 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.034252144 = queryNorm
            0.24080718 = fieldWeight in 2049, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.03125 = fieldNorm(doc=2049)
        0.013850937 = weight(_text_:retrieval in 2049) [ClassicSimilarity], result of:
          0.013850937 = score(doc=2049,freq=2.0), product of:
            0.10360982 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.034252144 = queryNorm
            0.13368362 = fieldWeight in 2049, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=2049)
      0.25 = coord(2/8)
    
    Abstract
    Designed for the digital world and an expanding universe of metadata users, RDA: Resource Description and Access is the new, unified cataloguing standard. Benefits of RDA include: a structure based on the conceptual models of FRBR (Functional Requirements for Bibliographic Records) and FRAD (Functional Requirements for Authority Data), to help catalog users find the information they need more easily; a flexible framework for content description of digital resources that also serves the needs of libraries organizing traditional resources; and a better fit with emerging technologies, enabling institutions to introduce efficiencies in data capture, storage and retrieval. The online RDA Toolkit provides a one-stop resource for evaluating and implementing RDA, and is the most effective way to interact with the new standard. It includes searchable and browseable RDA instructions; two views of RDA content, by table of contents and by element set; user-created and sharable Workflows and Mappings, tools to customize RDA to support your organization's training, internal processes, and local policies; Library of Congress-Program for Cooperative Cataloging Policy Statements (LC-PCC PS) and links to other relevant cataloguing resources; and the full text of AACR2 with links to RDA. This full-text print version of RDA offers a snapshot that serves as an offline access point to help solo and part-time cataloguers evaluate RDA, as well as to support training and classroom use in any size institution. An index is included. The online RDA Toolkit includes PDFs, but purchasing the print version offers a convenient, time-saving option.
  15. Martins, S. de Castro: Modelo conceitual de ecossistema semântico de informações corporativas para aplicação em objetos multimídia (2019) 0.01
    0.014698472 = product of:
      0.058793887 = sum of:
        0.044942953 = weight(_text_:storage in 117) [ClassicSimilarity], result of:
          0.044942953 = score(doc=117,freq=2.0), product of:
            0.1866346 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.034252144 = queryNorm
            0.24080718 = fieldWeight in 117, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.03125 = fieldNorm(doc=117)
        0.013850937 = weight(_text_:retrieval in 117) [ClassicSimilarity], result of:
          0.013850937 = score(doc=117,freq=2.0), product of:
            0.10360982 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.034252144 = queryNorm
            0.13368362 = fieldWeight in 117, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=117)
      0.25 = coord(2/8)
    
    Abstract
    Information management in corporate environments is a growing problem as companies' information assets grow, along with the need to use them in operations. Several management models have been practiced on the most diverse fronts, practices that together make up so-called Enterprise Content Management. This study proposes a conceptual model of a semantic corporate information ecosystem, based on the Universal Document Model proposed by Dagobert Soergel. It focuses on unstructured information objects, especially multimedia, which are increasingly used in corporate environments, adding semantics and expanding their retrieval potential for the composition and reuse of dynamic documents on demand. The proposed model considers stable elements in the organizational environment, such as actors, processes, business metadata and information objects, as well as some basic infrastructures of the corporate information environment. The main objective is to establish a conceptual model that adds semantic intelligence to information assets, leveraging pre-existing infrastructure in organizations and integrating and relating objects to other objects, actors and business processes. The methodology considered the state of the art of Information Organization, Representation and Retrieval, Organizational Content Management and Semantic Web technologies in the scientific literature as the basis for an integrative conceptual model; the research is therefore qualitative and exploratory. The predicted steps of the model are: Environment, Data Type and Source Definition, Data Distillation, Metadata Enrichment, and Storage. As a result, in theoretical terms the extended model allows heterogeneous and unstructured data to be processed according to the established cut-outs and through the processes listed above, allowing value creation in the composition of dynamic information objects, with semantic aggregations to metadata.
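    The model's predicted steps, Environment, Data Type and Source Definition, Data Distillation, Metadata Enrichment, and Storage, suggest a simple processing chain. The sketch below uses hypothetical stand-ins for those steps, not the author's implementation:

```python
STORE = {}

def define_type_and_source(obj):  # Data Type and Source Definition
    obj["type"] = "video" if obj["name"].endswith(".mp4") else "document"
    obj.setdefault("source", "intranet")
    return obj

def distill(obj):                 # Data Distillation: keep only needed fields
    return {k: obj[k] for k in ("name", "type", "source")}

def enrich_metadata(obj):         # Metadata Enrichment
    obj["actors"] = []            # to be linked to organizational actors
    obj["process"] = None         # to be linked to a business process
    return obj

def store(obj):                   # Storage
    STORE[obj["name"]] = obj
    return obj

def run_pipeline(raw_objects):
    return [store(enrich_metadata(distill(define_type_and_source(o))))
            for o in raw_objects]

print(run_pipeline([{"name": "townhall.mp4"}]))
```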
  16. Langville, A.N.; Meyer, C.D.: Google's PageRank and beyond : the science of search engine rankings (2006) 0.01
    0.014233985 = product of:
      0.05693594 = sum of:
        0.033707213 = weight(_text_:storage in 6) [ClassicSimilarity], result of:
          0.033707213 = score(doc=6,freq=2.0), product of:
            0.1866346 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.034252144 = queryNorm
            0.18060538 = fieldWeight in 6, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.0234375 = fieldNorm(doc=6)
        0.023228727 = weight(_text_:retrieval in 6) [ClassicSimilarity], result of:
          0.023228727 = score(doc=6,freq=10.0), product of:
            0.10360982 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.034252144 = queryNorm
            0.22419426 = fieldWeight in 6, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0234375 = fieldNorm(doc=6)
      0.25 = coord(2/8)
    
    Content
    Contents: Chapter 1. Introduction to Web Search Engines: 1.1 A Short History of Information Retrieval - 1.2 An Overview of Traditional Information Retrieval - 1.3 Web Information Retrieval Chapter 2. Crawling, Indexing, and Query Processing: 2.1 Crawling - 2.2 The Content Index - 2.3 Query Processing Chapter 3. Ranking Webpages by Popularity: 3.1 The Scene in 1998 - 3.2 Two Theses - 3.3 Query-Independence Chapter 4. The Mathematics of Google's PageRank: 4.1 The Original Summation Formula for PageRank - 4.2 Matrix Representation of the Summation Equations - 4.3 Problems with the Iterative Process - 4.4 A Little Markov Chain Theory - 4.5 Early Adjustments to the Basic Model - 4.6 Computation of the PageRank Vector - 4.7 Theorem and Proof for Spectrum of the Google Matrix Chapter 5. Parameters in the PageRank Model: 5.1 The alpha Factor - 5.2 The Hyperlink Matrix H - 5.3 The Teleportation Matrix E Chapter 6. The Sensitivity of PageRank: 6.1 Sensitivity with respect to alpha - 6.2 Sensitivity with respect to H - 6.3 Sensitivity with respect to vT - 6.4 Other Analyses of Sensitivity - 6.5 Sensitivity Theorems and Proofs Chapter 7. The PageRank Problem as a Linear System: 7.1 Properties of (I - alphaS) - 7.2 Properties of (I - alphaH) - 7.3 Proof of the PageRank Sparse Linear System Chapter 8. Issues in Large-Scale Implementation of PageRank: 8.1 Storage Issues - 8.2 Convergence Criterion - 8.3 Accuracy - 8.4 Dangling Nodes - 8.5 Back Button Modeling
    Chapter 9. Accelerating the Computation of PageRank: 9.1 An Adaptive Power Method - 9.2 Extrapolation - 9.3 Aggregation - 9.4 Other Numerical Methods Chapter 10. Updating the PageRank Vector: 10.1 The Two Updating Problems and their History - 10.2 Restarting the Power Method - 10.3 Approximate Updating Using Approximate Aggregation - 10.4 Exact Aggregation - 10.5 Exact vs. Approximate Aggregation - 10.6 Updating with Iterative Aggregation - 10.7 Determining the Partition - 10.8 Conclusions Chapter 11. The HITS Method for Ranking Webpages: 11.1 The HITS Algorithm - 11.2 HITS Implementation - 11.3 HITS Convergence - 11.4 HITS Example - 11.5 Strengths and Weaknesses of HITS - 11.6 HITS's Relationship to Bibliometrics - 11.7 Query-Independent HITS - 11.8 Accelerating HITS - 11.9 HITS Sensitivity Chapter 12. Other Link Methods for Ranking Webpages: 12.1 SALSA - 12.2 Hybrid Ranking Methods - 12.3 Rankings based on Traffic Flow Chapter 13. The Future of Web Information Retrieval: 13.1 Spam - 13.2 Personalization - 13.3 Clustering - 13.4 Intelligent Agents - 13.5 Trends and Time-Sensitive Search - 13.6 Privacy and Censorship - 13.7 Library Classification Schemes - 13.8 Data Fusion Chapter 14. Resources for Web Information Retrieval: 14.1 Resources for Getting Started - 14.2 Resources for Serious Study Chapter 15. The Mathematics Guide: 15.1 Linear Algebra - 15.2 Perron-Frobenius Theory - 15.3 Markov Chains - 15.4 Perron Complementation - 15.5 Stochastic Complementation - 15.6 Censoring - 15.7 Aggregation - 15.8 Disaggregation
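    The PageRank computation developed in Chapters 4-8 (hyperlink matrix H, damping factor alpha, teleportation, dangling-node repair, power-method convergence) reduces to a short iteration. This is the standard textbook formulation under the book's notation, not code from the book:

```python
import numpy as np

def pagerank(H, alpha=0.85, tol=1e-10, max_iter=1000):
    """Power method on the Google matrix G = alpha*S + (1-alpha)*(1/n)*e*e^T,
    where S repairs the dangling (all-zero) rows of the row-stochastic
    hyperlink matrix H (cf. Chapters 4-5 and Section 8.4)."""
    n = H.shape[0]
    S = H.astype(float).copy()
    S[S.sum(axis=1) == 0] = 1.0 / n       # dangling-node fix
    pi = np.full(n, 1.0 / n)              # uniform start vector
    for _ in range(max_iter):
        new = alpha * (pi @ S) + (1 - alpha) / n
        if np.abs(new - pi).sum() < tol:  # 1-norm convergence criterion
            return new
        pi = new
    return pi

# Tiny 3-page web: page 0 links to 1 and 2, page 1 links to 2, page 2 dangles.
H = np.array([[0, 0.5, 0.5],
              [0, 0,   1.0],
              [0, 0,   0.0]])
print(pagerank(H))  # stationary PageRank vector, sums to 1
```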
  17. Booth, P.F.: Indexing : the manual of good practice (2001) 0.01
    0.007349236 = product of:
      0.029396944 = sum of:
        0.022471476 = weight(_text_:storage in 1968) [ClassicSimilarity], result of:
          0.022471476 = score(doc=1968,freq=2.0), product of:
            0.1866346 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.034252144 = queryNorm
            0.12040359 = fieldWeight in 1968, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.015625 = fieldNorm(doc=1968)
        0.0069254683 = weight(_text_:retrieval in 1968) [ClassicSimilarity], result of:
          0.0069254683 = score(doc=1968,freq=2.0), product of:
            0.10360982 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.034252144 = queryNorm
            0.06684181 = fieldWeight in 1968, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.015625 = fieldNorm(doc=1968)
      0.25 = coord(2/8)
    
    Footnote
    Rez. in: nfd - Information Wissenschaft und Praxis 54(2003) H.7, S.440-442 (R. Fugmann): "The book opens with the chapter "Myths about Indexing" and a list of widespread misconceptions about indexing, above all about the making of back-of-the-book indexes. A single sentence aptly sketches the problem to which the book is devoted: "With the development of electronic documents, it has become possible to store very large amounts of information; but storage is not of much use without the capability to retrieve, to convert, transfer and reuse the information". The author criticizes the widely held view that indexing is merely a matter of "picking out words from the text or naming objects in images and using those words as index headings". Such a procedure, however, yields not indexes but concordances (i.e. alphabetical lists of the locations of text words), and "... is entirely dependent on the words themselves and is not concerned with the ideas behind them". Collecting information is easy. But making it (re)findable must be learned if more is to be achieved than merely re-locating texts one still remembers in every detail, down to the wording used for the concepts sought (known-item searches, questions of recall). Drawing on her extensive practical experience, the author describes the steps to be taken at the conceptual and at the technical level. Among the former is the exclusion of details that should not be represented in the index ("unsought terms"), because they will certainly never be a search target and, as "false friends", would flood the searcher with trivia, a decision that can be made only with sound subject knowledge. Everything, on the other hand, that could be a meaningful search target now or in the future (!) and is "sufficiently informative" deserves a heading in the index. Instructive examples also show how a text word becomes useless as a (poor) index heading once it is detached from the interpretive context in which it was embedded in the text. The ambiguity that clings to almost every natural-language word must likewise be resolved; otherwise the searcher will all too often be led astray when looking things up, and the more so the larger such an uncleaned store has already grown.
    Access to the information store must also be provided from related concepts, for searchers readily let their question lead them to broader and, above all, to more specific concepts. "See also" references serve this purpose. Access must likewise be provided from different but synonymous expressions by means of "see" references, for a searcher may have set out with one of these synonyms and would otherwise find nothing. Much, too, for which a searcher has a heading ready at hand is encountered in a text only in wordy circumlocution and paraphrase ("Terms that may not appear in the text but are likely to be sought by index users"), i.e. practically unfindable under such varied wording. All of this should be expressed lexically, and in familiar terminology, for that is also the form in which questions are posed. Here runs the boundary between "concept indexing" and mere "word indexing", the latter contenting itself with the presentation of uninterpreted text words. Not only is such a boundary widely unknown; its existence is sometimes even denied, although a word usually expresses many concepts and a concept is usually expressed by many different words and sentences. An author can and often must content himself with hints in his texts, because a reader or listener can recognize what is meant from the context and does not want to be burdened with excessive explicitness (spoon feeding), which would be felt as an imputation of ignorance. For retrieval, however, what is meant must be expressed explicitly. This book makes clear how much extra-textual and background knowledge must be brought to bear for a good indexing result, on the basis of expert and careful interpretation ("The indexer must understand the meaning of a text"). All this makes good indexing appear not only as a professional service but also as an art. As the basis for all these steps a thesaurus is recommended, with a well-structured network of relationships, adapted to the text of the book at hand. Only rarely, however, will one be able to fall back on thesauri already available elsewhere. A pointer to the relevant literature on thesaurus construction would have been useful here."

Types

  • a 229
  • m 92
  • s 33
  • el 5
  • x 2
  • b 1
  • i 1
  • n 1
  • p 1
  • r 1
