Search (14 results, page 1 of 1)

  • theme_ss:"Konzeption und Anwendung des Prinzips Thesaurus"
  • theme_ss:"Wissensrepräsentation"
  • type_ss:"a"
  1. Mazzocchi, F.; Plini, P.: Refining thesaurus relational structure : implications and opportunities (2008) 0.02
    0.018403849 = product of:
      0.092019245 = sum of:
        0.092019245 = weight(_text_:section in 5448) [ClassicSimilarity], result of:
          0.092019245 = score(doc=5448,freq=2.0), product of:
            0.26305357 = queryWeight, product of:
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.049850095 = queryNorm
            0.34981182 = fieldWeight in 5448, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.046875 = fieldNorm(doc=5448)
      0.2 = coord(1/5)
    
    Source
    Kompatibilität, Medien und Ethik in der Wissensorganisation - Compatibility, Media and Ethics in Knowledge Organization: Proceedings der 10. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation Wien, 3.-5. Juli 2006 - Proceedings of the 10th Conference of the German Section of the International Society of Knowledge Organization Vienna, 3-5 July 2006. Ed.: H.P. Ohly, S. Netscher u. K. Mitgutsch
  2. Bandholtz, T.; Schulte-Coerne, T.; Glaser, R.; Fock, J.; Keller, T.: iQvoc - open source SKOS(XL) maintenance and publishing tool (2010) 0.02
    0.017673712 = product of:
      0.04418428 = sum of:
        0.03230309 = weight(_text_:on in 604) [ClassicSimilarity], result of:
          0.03230309 = score(doc=604,freq=6.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.29462588 = fieldWeight in 604, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=604)
        0.011881187 = weight(_text_:information in 604) [ClassicSimilarity], result of:
          0.011881187 = score(doc=604,freq=2.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.13576832 = fieldWeight in 604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=604)
      0.4 = coord(2/5)
    
    Abstract
iQvoc is a new open source SKOS-XL vocabulary management tool developed by the Federal Environment Agency, Germany, and innoQ Deutschland GmbH. Its immediate purpose is maintaining and publishing reference vocabularies in the upcoming Linked Data cloud of environmental information, but it can easily be adapted to host any SKOS-XL compliant vocabulary. iQvoc is implemented as a Ruby on Rails application running on top of JRuby, the Java implementation of the Ruby programming language. To improve the user experience when editing content, iQvoc makes heavy use of the jQuery JavaScript library.
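    The abstract's mention of SKOS-XL is concrete enough to illustrate. Below is a minimal sketch of the kind of SKOS-XL data such a tool manages, written with Python's rdflib (an assumption made purely for illustration; iQvoc itself is a Ruby on Rails application and does not use this library). The point is that SKOS-XL reifies labels as resources with their own literal form, so labels can carry further metadata. The vocabulary base URI and concept are invented.
```python
# Sketch: a SKOS-XL concept with a reified preferred label, via rdflib.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

SKOSXL = Namespace("http://www.w3.org/2008/05/skos-xl#")
EX = Namespace("http://example.org/vocab/")  # hypothetical vocabulary base URI

g = Graph()
g.bind("skos", SKOS)
g.bind("skosxl", SKOSXL)

concept = EX["air-pollution"]
label = EX["air-pollution-label-en"]

g.add((concept, RDF.type, SKOS.Concept))
# SKOS-XL reifies the label as a resource, so it can bear its own metadata.
g.add((label, RDF.type, SKOSXL.Label))
g.add((label, SKOSXL.literalForm, Literal("air pollution", lang="en")))
g.add((concept, SKOSXL.prefLabel, label))

print(g.serialize(format="turtle"))
```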
    Source
    Proceedings of the Sixth Workshop on Scripting and Development for the Semantic Web, Crete, Greece, May 31, 2010, CEUR Workshop Proceedings, SFSW - http://ceur-ws.org/Vol-699/Paper2.pdf
  3. Boteram, F.: Semantische Relationen in Dokumentationssprachen : vom Thesaurus zum semantischen Netz (2010) 0.01
    0.014208074 = product of:
      0.035520185 = sum of:
        0.011881187 = weight(_text_:information in 4792) [ClassicSimilarity], result of:
          0.011881187 = score(doc=4792,freq=2.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.13576832 = fieldWeight in 4792, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4792)
        0.023639 = product of:
          0.047278 = sum of:
            0.047278 = weight(_text_:22 in 4792) [ClassicSimilarity], result of:
              0.047278 = score(doc=4792,freq=2.0), product of:
                0.17456654 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049850095 = queryNorm
                0.2708308 = fieldWeight in 4792, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4792)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
Modern information retrieval methods call for expressive documentation languages with detailed relational structures. The selective transfer of individual modelling strategies from the field of semantic technologies to the design and relational structuring of existing documentation languages is discussed. A hierarchically structured inventory of relations is defined in the form of a taxonomy; it contains both sufficiently general and numerous specific relation types that allow a detailed, and therefore expressive, relational structuring of the vocabulary. This yields gains in clarity and functionality. In contrast to other approaches to creating relation inventories, the proposal presented here develops the relation inventory out of the set of concepts of an existing subject domain.
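    To make the idea of a hierarchically structured relation inventory concrete, here is a toy Python sketch. The relation type names are hypothetical illustrations, not Boteram's actual inventory; the point is only the tree structure running from sufficiently general to more specific relation types.
```python
# Sketch: a relation inventory as a taxonomy of relation types.
from dataclasses import dataclass, field

@dataclass
class RelationType:
    name: str
    children: list["RelationType"] = field(default_factory=list)

    def specializations(self):
        """Yield this relation type and every type below it."""
        yield self
        for child in self.children:
            yield from child.specializations()

# Hypothetical inventory: general types on top, specific types beneath.
inventory = RelationType("related", children=[
    RelationType("hierarchical", children=[
        RelationType("generic (genus/species)"),
        RelationType("partitive (whole/part)"),
    ]),
    RelationType("associative", children=[
        RelationType("causal"),
        RelationType("instrumental"),
    ]),
])

for rel in inventory.specializations():
    print(rel.name)
```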
    Source
Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P. Ohly
  4. Gladun, A.; Rogushina, J.: Development of domain thesaurus as a set of ontology concepts with use of semantic similarity and elements of combinatorial optimization (2021) 0.01
    0.014181092 = product of:
      0.03545273 = sum of:
        0.018650195 = weight(_text_:on in 572) [ClassicSimilarity], result of:
          0.018650195 = score(doc=572,freq=2.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.17010231 = fieldWeight in 572, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=572)
        0.016802534 = weight(_text_:information in 572) [ClassicSimilarity], result of:
          0.016802534 = score(doc=572,freq=4.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.1920054 = fieldWeight in 572, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=572)
      0.4 = coord(2/5)
    
    Abstract
We consider the use of ontological background knowledge in intelligent information systems and analyze how it can be reduced in accordance with the specifics of a particular user task. Such reduction is aimed at simplifying knowledge processing without loss of significant information. We propose methods for generating task thesauri from a domain ontology: a task thesaurus contains the subset of ontological concepts and relations that can be used in solving the task. Combinatorial optimization is used to minimize the task thesaurus, and semantic similarity estimates are used to determine the significance of each concept for the user task. Practical examples of applying optimized thesauri to semantic retrieval and competence analysis demonstrate the efficiency of the proposed approach.
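    As a rough illustration of the reduction step just described, the following Python sketch keeps only those ontology concepts whose similarity to the task terms clears a threshold. The similarity measure and the threshold are placeholders, and this greedy filter only approximates the combinatorial optimization the authors actually use.
```python
# Sketch: reduce a domain ontology's concepts to a task thesaurus.

def build_task_thesaurus(concepts, task_terms, similarity, threshold=0.5):
    """Select the subset of ontology concepts significant for the task."""
    return {
        c for c in concepts
        if max(similarity(c, t) for t in task_terms) >= threshold
    }

def token_overlap(a, b):
    """Toy similarity: Jaccard overlap of label tokens (a placeholder)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

ontology_concepts = ["water pollution", "air quality", "soil chemistry"]
task = ["pollution monitoring", "water quality"]
print(build_task_thesaurus(ontology_concepts, task, token_overlap, 0.2))
# -> {'water pollution', 'air quality'}
```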
  5. Assem, M. van: Converting and integrating vocabularies for the Semantic Web (2010) 0.01
    0.013157635 = product of:
      0.032894086 = sum of:
        0.026104836 = weight(_text_:on in 4639) [ClassicSimilarity], result of:
          0.026104836 = score(doc=4639,freq=12.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.23809364 = fieldWeight in 4639, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=4639)
        0.0067892494 = weight(_text_:information in 4639) [ClassicSimilarity], result of:
          0.0067892494 = score(doc=4639,freq=2.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.0775819 = fieldWeight in 4639, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=4639)
      0.4 = coord(2/5)
    
    Abstract
This thesis focuses on conversion of vocabularies for representation and integration of collections on the Semantic Web. A secondary focus is how to represent metadata schemas (RDF Schemas representing metadata element sets) such that they interoperate with vocabularies. The primary domain in which we operate is that of cultural heritage collections. The background worldview in which a solution is sought is that of the Semantic Web research paradigm with its associated theories, methods, tools and use cases. In other words, we assume the Semantic Web is in principle able to provide the context to realize interoperable collections. Interoperability is dependent on the interplay between representations and the applications that use them. We mean applications in the widest sense, such as "search" and "annotation". These applications or tasks are often present in software applications, such as the E-Culture application. It is therefore necessary that applications' requirements on the vocabulary representation are met. This leads us to formulate the following problem statement: HOW CAN EXISTING VOCABULARIES BE MADE AVAILABLE TO SEMANTIC WEB APPLICATIONS?
We refine the problem statement into three research questions. The first two focus on the problem of converting a vocabulary from its original format to a Semantic Web representation. Conversion of a vocabulary to a representation in a Semantic Web language is necessary to make the vocabulary available to Semantic Web applications. In the last question we focus on integration of collection metadata schemas in a way that allows for vocabulary representations as produced by our methods. Academic dissertation for the degree of Doctor at the Vrije Universiteit Amsterdam, Dutch Research School for Information and Knowledge Systems.
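    A hedged sketch of the kind of conversion step the thesis studies, in Python with rdflib: a flat vocabulary record in its original format is mapped onto a SKOS concept. The record layout and URIs are invented for illustration; the thesis defines such mappings per source vocabulary.
```python
# Sketch: convert one flat vocabulary record into SKOS triples.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/scheme/")  # hypothetical scheme base URI

# Invented source record, standing in for a row of the original format.
record = {"id": "123", "term": "Windmills",
          "broader": "456", "used_for": ["Wind mills"]}

g = Graph()
g.bind("skos", SKOS)
concept = EX[record["id"]]
g.add((concept, RDF.type, SKOS.Concept))
g.add((concept, SKOS.prefLabel, Literal(record["term"], lang="en")))
g.add((concept, SKOS.broader, EX[record["broader"]]))
for alt in record["used_for"]:
    g.add((concept, SKOS.altLabel, Literal(alt, lang="en")))

print(g.serialize(format="turtle"))
```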
  6. Amirhosseini, M.; Avidan, G.: ¬A dialectic perspective on the evolution of thesauri and ontologies (2021) 0.01
    0.012624079 = product of:
      0.031560197 = sum of:
        0.023073634 = weight(_text_:on in 592) [ClassicSimilarity], result of:
          0.023073634 = score(doc=592,freq=6.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.21044704 = fieldWeight in 592, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=592)
        0.0084865615 = weight(_text_:information in 592) [ClassicSimilarity], result of:
          0.0084865615 = score(doc=592,freq=2.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.09697737 = fieldWeight in 592, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=592)
      0.4 = coord(2/5)
    
    Abstract
The purpose of this article is to identify the most important factors and features in the evolution of thesauri and ontologies through a dialectic model. This model relies on a dialectic process or idea which can be discovered via a dialectic method. This method focuses on identifying the logical relationship between a beginning proposition, or an idea called a thesis, a negation of that idea called the antithesis, and the result of the conflict between the two ideas, called a synthesis. During the creation of knowledge organization systems (KOSs), the identification of logical relations between different ideas has been made possible through the consideration and use of the most influential methods and tools, such as dictionaries, Roget's Thesaurus, thesauri, micro-, macro- and metathesauri, ontologies, and lower-, middle- and upper-level ontologies. The analysis process adopted a historical methodology, more specifically a dialectic method and a documentary method, as the reasoning process. This supports our arguments and synthesizes a method for the analysis of research results. As confirmed by the research results, the principle of unity has proved to be the most important factor in the development and evolution of the structure of knowledge organization systems and their types. There are various types of unity to consider in the analysis of logical relations: the principle of unity of alphabetical order, unity of science, semantic unity, structural unity and conceptual unity. The results clearly demonstrate a movement from plurality to unity in the assembling of the complex structure of knowledge organization systems, to increase information and knowledge storage and retrieval performance.
  7. Fagundes, P.B.; Freund, G.P.; Vital, L.P.; Monteiro de Barros, C.; Macedo, D.D.J.de: Taxonomias, ontologias e tesauros : possibilidades de contribuição para o processo de Engenharia de Requisitos (2020) 0.01
    0.012117877 = product of:
      0.03029469 = sum of:
        0.013321568 = weight(_text_:on in 5828) [ClassicSimilarity], result of:
          0.013321568 = score(doc=5828,freq=2.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.121501654 = fieldWeight in 5828, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5828)
        0.016973123 = weight(_text_:information in 5828) [ClassicSimilarity], result of:
          0.016973123 = score(doc=5828,freq=8.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.19395474 = fieldWeight in 5828, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5828)
      0.4 = coord(2/5)
    
    Abstract
Some of the fundamental activities of the software development process belong to the discipline of Requirements Engineering, whose objective is the discovery, analysis, documentation and verification of the requirements that will be part of the system. Requirements are the conditions or capabilities that software must have or perform to meet the users' needs. The present study proposes a model of cooperation between Information Science and Requirements Engineering. It presents the results of an analysis of the possibilities of using the knowledge organization systems taxonomies, thesauri and ontologies during the Requirements Engineering activities: design, elicitation, elaboration, negotiation, specification, validation and requirements management. From the results obtained it was possible to identify at which stage of the Requirements Engineering process each type of knowledge organization system could be used. We expect this study to highlight the need for new research and proposals to strengthen the exchange between Information Science, as a science that has information as its object of study, and Requirements Engineering, which finds in information the raw material for identifying the informational needs of software users.
  8. Kless, D.; Milton, S.: Comparison of thesauri and ontologies from a semiotic perspective (2010) 0.01
    0.009936477 = product of:
      0.024841193 = sum of:
        0.0101838745 = weight(_text_:information in 756) [ClassicSimilarity], result of:
          0.0101838745 = score(doc=756,freq=2.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.116372846 = fieldWeight in 756, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=756)
        0.014657319 = product of:
          0.029314637 = sum of:
            0.029314637 = weight(_text_:technology in 756) [ClassicSimilarity], result of:
              0.029314637 = score(doc=756,freq=2.0), product of:
                0.14847288 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.049850095 = queryNorm
                0.19744103 = fieldWeight in 756, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.046875 = fieldNorm(doc=756)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Footnote
    Preprint. To be published as Vol 122 in the Conferences in Research and Practice in Information Technology Series by the Australian Computer Society Inc. http://crpit.com/.
  9. Kless, D.; Milton, S.; Kazmierczak, E.; Lindenthal, J.: Thesaurus and ontology structure : formal and pragmatic differences and similarities (2015) 0.01
    0.008280397 = product of:
      0.020700993 = sum of:
        0.0084865615 = weight(_text_:information in 2036) [ClassicSimilarity], result of:
          0.0084865615 = score(doc=2036,freq=2.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.09697737 = fieldWeight in 2036, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2036)
        0.012214432 = product of:
          0.024428863 = sum of:
            0.024428863 = weight(_text_:technology in 2036) [ClassicSimilarity], result of:
              0.024428863 = score(doc=2036,freq=2.0), product of:
                0.14847288 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.049850095 = queryNorm
                0.16453418 = fieldWeight in 2036, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2036)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.7, S.1348-1366
  10. Rolland-Thomas, P.: Thesaural codes : an appraisal of their use in the Library of Congress Subject Headings (1993) 0.01
    0.006624318 = product of:
      0.016560795 = sum of:
        0.0067892494 = weight(_text_:information in 549) [ClassicSimilarity], result of:
          0.0067892494 = score(doc=549,freq=2.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.0775819 = fieldWeight in 549, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=549)
        0.009771545 = product of:
          0.01954309 = sum of:
            0.01954309 = weight(_text_:technology in 549) [ClassicSimilarity], result of:
              0.01954309 = score(doc=549,freq=2.0), product of:
                0.14847288 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.049850095 = queryNorm
                0.13162735 = fieldWeight in 549, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.03125 = fieldNorm(doc=549)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
LCSH has been known by that name since 1975. Its headings have always been created to serve the LC collections rather than from a theoretical basis. It started to replace cross-reference codes with thesaural codes in 1986, in a mechanical fashion; it was in no way transformed into a thesaurus. Its encyclopedic coverage and its pre-coordinate concepts make it substantially distinct, considering that thesauri usually map a restricted field of knowledge and use uniterms. The questions raised are whether the new symbols comply with thesaurus standards and whether they are true to one or to several models. Explanations and definitions from other lists of subject headings and thesauri, and from the literature on classification and subject indexing, provide some answers. For instance, a see reference leads from a subject heading that is not used to one or more that are; exceptionally it leads from a specific term to a more general one. Some equate a see reference with the equivalence relationship; such relationships are marked by USE in LCSH. See also references are made from the broader subject to narrower parts of it, and also between associated subjects. They suggest lateral or vertical connexions as well as reciprocal relationships; they serve a coordination purpose for some, and lay down a methodical search itinerary for others. Since their inception in the 1950s, thesauri have been devised for indexing and retrieving information in the fields of science and technology; eventually they attended to a number of social sciences and humanities. Research derived from thesauri was voluminous, and numerous guidelines were designed; they did not discriminate between the "hard" sciences and the social sciences. RT relationships are widely but diversely used in numerous controlled vocabularies. LCSH's aim is to achieve a list almost free of RT and SA references; it thus restricts relationships to BT/NT, USE and UF. This raises the question of whether all fields of knowledge can "fit" in the Procrustean bed of BT/NT, i.e., genus/species relationships. Standard codes were devised, but it was soon realized that BT/NT, well suited to the genus/species couple, could not signal a whole-part relationship. In LCSH, BT and NT function as reciprocals; the whole-part relationship is taken into account by ISO and is amply elaborated upon by authors, while the part-whole connexion is sometimes studied apart. The decision to replace cross-reference codes was an improvement: relations can now be distinguished, though the distinct needs of numerous fields of knowledge are not attended to. Topic inclusion, and topic-subtopic, could provide the missing link where genus/species or whole/part are inadequate. Distinct codes, BT/NT and whole/part, should be provided. Sorting relationships with mechanical means can only lead to confusion.
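    The 1986 recoding the abstract discusses can be sketched as a simple, mechanical translation table. The old cross-reference codes below (see, x, sa, xx) are assumptions drawn from pre-1986 LCSH practice rather than from this abstract; the thesaural targets are the ones the abstract names (USE, UF, BT/NT, RT).
```python
# Sketch: mechanical recoding of cross-reference codes into thesaural codes.
CODE_MAP = {
    "see": "USE",    # from an unused heading to the heading used
    "x":   "UF",     # reciprocal of a see reference (assumed old code)
    "sa":  "NT/RT",  # see also: downward hierarchical or associative
    "xx":  "BT/RT",  # reciprocal of a see also reference (assumed old code)
}

# Reciprocal pairs among the thesaural codes the abstract names.
RECIPROCAL = {"USE": "UF", "UF": "USE", "BT": "NT", "NT": "BT", "RT": "RT"}

def recode(old_code: str) -> str:
    """Translate an old cross-reference code mechanically, as LCSH did."""
    return CODE_MAP.get(old_code, old_code)

print(recode("see"), RECIPROCAL["BT"])  # -> USE NT
```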
  11. Garshol, L.M.: Metadata? Thesauri? Taxonomies? Topic Maps! : making sense of it all (2005) 0.00
    0.004989059 = product of:
      0.024945294 = sum of:
        0.024945294 = weight(_text_:information in 4729) [ClassicSimilarity], result of:
          0.024945294 = score(doc=4729,freq=12.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.2850541 = fieldWeight in 4729, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4729)
      0.2 = coord(1/5)
    
    Abstract
The task of an information architect is to create web sites where users can actually find the information they are looking for. As the ocean of information rises and leaves what we seek ever more deeply buried in what we don't seek, this discipline becomes ever more relevant. Information architecture involves many different aspects of web site creation and organization, but its principal tools are information organization techniques developed in other disciplines. Most of these techniques come from library science, such as thesauri, taxonomies, and faceted classification. Topic maps are a relative newcomer to this area and bring with them the promise of better-organized web sites than is possible with existing techniques. However, it is not generally understood how topic maps relate to the traditional techniques, or what advantages and disadvantages they have in comparison. The aim of this paper is to help build a better understanding of these issues.
    Source
    Journal of information science. 30(2005) no.4, S.378-391
  12. Amirhosseini, M.: Theoretical base of quantitative evaluation of unity in a thesaurus term network based on Kant's epistemology (2010) 0.00
    0.004614727 = product of:
      0.023073634 = sum of:
        0.023073634 = weight(_text_:on in 5854) [ClassicSimilarity], result of:
          0.023073634 = score(doc=5854,freq=6.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.21044704 = fieldWeight in 5854, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5854)
      0.2 = coord(1/5)
    
    Abstract
The quantitative evaluation of thesauri has advanced considerably since 1976. This type of evaluation is based on counting specific factors in the thesaurus structure, such as preferred terms, non-preferred terms, cross-reference terms and so on, and various statistical tests have accordingly been proposed and applied to the evaluation of thesauri. In this article, we explain some ratios for the quantitative evaluation of unity in a thesaurus term network. The theoretical basis for constructing the ratios' indicators and indices, and the epistemological thought behind this type of quantitative evaluation, are discussed. That theoretical basis is the epistemology of Immanuel Kant's Critique of Pure Reason: the cognitive states of transcendental understanding are divided into three steps, the first being perception, the second combination and the third relation making. Term relation domains and conceptual relation domains can be analyzed with ratios. The use of quantitative evaluation in current research on thesaurus construction prepares the basis for a period of restoration. In modern thesaurus construction, traditional term relations are analyzed in detail in the form of new conceptual relations; hence the new domains of hierarchical and associative relations are constructed as relations between concepts. These newly formed conceptual domains can be a suitable basis for the quantitative analysis of conceptual relations.
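    As a minimal illustration of ratio analysis over a term network, the Python sketch below computes the share of hierarchical links among all conceptual relations. This particular ratio is an invented example in the spirit of the article, not one of its actual indices.
```python
# Sketch: one simple ratio over a thesaurus term network.

def hierarchy_ratio(relations):
    """relations: iterable of (source, relation_type, target) triples."""
    rels = list(relations)
    hierarchical = sum(1 for _, rel_type, _ in rels if rel_type in ("BT", "NT"))
    return hierarchical / len(rels) if rels else 0.0

term_network = [
    ("birds", "NT", "parrots"),
    ("parrots", "BT", "birds"),
    ("birds", "RT", "ornithology"),
]
print(f"{hierarchy_ratio(term_network):.2f}")  # -> 0.67
```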
  13. Fischer, D.H.: From thesauri towards ontologies? (1998) 0.00
    0.0045214905 = product of:
      0.022607451 = sum of:
        0.022607451 = weight(_text_:on in 2176) [ClassicSimilarity], result of:
          0.022607451 = score(doc=2176,freq=4.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.20619515 = fieldWeight in 2176, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=2176)
      0.2 = coord(1/5)
    
    Abstract
The ISO 2788 guidelines for monolingual thesauri differentiate "the hierarchical relationship" into "generic", "partitive", and "instance", which, for purposes of document retrieval, was deemed adequate. Ontologies, however, designed as language inventories for a wider scope of knowledge representation, are based on all of these and on further logical differentiations. On the basis of a rereading of the ISO 2788 standard and an inspection of the published Cyc Upper Ontology, it is argued that adopting the document-retrieval definition of subsumption generally prevents the conception or use of a thesaurus as a substructure of an ontology of the new kind constructed for AI applications. When a thesaurus is used for fact description and for inference on fact descriptions, the instance-of relationship should also be reconsidered: it may also link concepts and metaconcepts, and then its distinction from subsumption is needed. The treatment of the instance-of relationship in thesauri, the Cyc Upper Ontology, and WordNet is described from this perspective.
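    The distinction the abstract insists on can be shown in a few RDF statements, here via Python's rdflib: subsumption links classes, while instance-of links an individual to a class, or a concept to a metaconcept. The URIs are invented for illustration.
```python
# Sketch: subsumption (rdfs:subClassOf) versus instance-of (rdf:type).
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/onto/")  # hypothetical ontology base URI

g = Graph()
# Generic (genus/species) hierarchy: every Parrot is a Bird.
g.add((EX.Parrot, RDFS.subClassOf, EX.Bird))
# Instance-of: Polly is a Parrot, but Polly is not a subclass of anything.
g.add((EX.Polly, RDF.type, EX.Parrot))
# Instance-of can also link a concept to a metaconcept, as the abstract notes.
g.add((EX.Parrot, RDF.type, EX.Species))

# rdf:type is not a subclass link, so the two relations stay distinct.
print((EX.Polly, RDF.type, EX.Parrot) in g)  # -> True
```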
  14. Amirhosseini, M.: Quantitative evaluation of the movement from complexity toward simplicity in the structure of thesaurus descriptors (2015) 0.00
    0.0024003622 = product of:
      0.012001811 = sum of:
        0.012001811 = weight(_text_:information in 3695) [ClassicSimilarity], result of:
          0.012001811 = score(doc=3695,freq=4.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.13714671 = fieldWeight in 3695, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3695)
      0.2 = coord(1/5)
    
    Abstract
The concepts of simplicity and complexity play major roles in information storage and retrieval in knowledge organizations. This paper reports an investigation of these concepts in the structure of descriptors. The main purpose of simplicity is to decrease the number of words in a descriptor, as this affects semantic relations, recall and precision. ISO 25964 affirms the purpose of simplicity by requiring that compound terms be split into simpler concepts. This work aims to elaborate the standard evaluation methods by providing a more detailed evaluation of descriptor structure and by identifying the factors behind simplicity and complexity in the structure of thesaurus descriptors. The research population is taken from the descriptors of the Commonwealth Agricultural Bureaux (CAB) Thesaurus, the Persian Cultural Thesaurus (ASFA) and the Chemical Thesaurus. The research was conducted using statistical and content analysis methods. We propose a new quantitative approach, together with novel indicators and indices involving Simplicity and Factoring Ratios, to evaluate descriptor structure. The results will be useful for verification, selection and maintenance in knowledge organizations, and the inquiry method can be developed further in the field of ontology evaluation.
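    As a hedged illustration, the Python sketch below computes one plausible descriptor-structure measure in the spirit of the paper's Simplicity Ratio: the share of single-word descriptors among all descriptors. The abstract does not give the exact definitions of the Simplicity and Factoring Ratios, so this formula and the sample terms are assumptions.
```python
# Sketch: share of single-word descriptors in a descriptor set.

def simplicity_ratio(descriptors):
    """Assumed formula: single-word descriptors / all descriptors."""
    single_word = sum(1 for d in descriptors if len(d.split()) == 1)
    return single_word / len(descriptors) if descriptors else 0.0

# Invented sample, not actual CAB Thesaurus data.
cab_sample = ["soil", "soil chemistry", "water", "water pollution control"]
print(f"{simplicity_ratio(cab_sample):.2f}")  # -> 0.50
```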
    Source
    Malaysian journal of library and information science. 20(2015), no.3, S.47-62