Search (77 results, page 1 of 4)

  • theme_ss:"Semantische Interoperabilität"
  1. Euzenat, J.; Shvaiko, P.: Ontology matching (2010) 0.06
    0.056794487 = product of:
      0.08519173 = sum of:
        0.07172854 = weight(_text_:systematic in 168) [ClassicSimilarity], result of:
          0.07172854 = score(doc=168,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.2525906 = fieldWeight in 168, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.03125 = fieldNorm(doc=168)
        0.013463181 = product of:
          0.026926363 = sum of:
            0.026926363 = weight(_text_:22 in 168) [ClassicSimilarity], result of:
              0.026926363 = score(doc=168,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.15476047 = fieldWeight in 168, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=168)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Ontologies are often viewed as the silver bullet for applications such as database integration, peer-to-peer systems, e-commerce, semantic web services, and social networks. However, in open or evolving systems such as the semantic web, different parties will in general adopt different ontologies, so merely using ontologies, like merely using XML, does not reduce heterogeneity: it raises the heterogeneity problem to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to this semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies; these correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, and artificial intelligence. With Ontology Matching, researchers and practitioners will find a reference book that presents currently available work in a uniform framework; the techniques it describes apply equally to database schema matching, catalog integration, XML schema matching, and other related problems. The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a detailed account of matching techniques and matching systems in a systematic way from theoretical, practical, and application perspectives.
    Date
    20. 6.2012 19:08:22
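The explain tree above for result 1 is standard Lucene ClassicSimilarity (TF-IDF) output, and the headline value 0.06 is the rounded form of 0.056794487. As a cross-check, the score can be re-derived from the factors shown. The following is a minimal Python sketch, assuming only Lucene's documented ClassicSimilarity formulas tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm, with all constants copied from the tree.

```python
import math

def term_score(freq, idf, query_norm, field_norm):
    """One term's contribution under ClassicSimilarity: queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                   # 1.4142135 for freq = 2.0
    query_weight = idf * query_norm        # idf(t) * queryNorm
    field_weight = tf * idf * field_norm   # tf * idf * fieldNorm
    return query_weight * field_weight

# Constants copied from the explain tree (doc 168).
# idf("systematic") = 1 + ln(44218 / (395 + 1))  ~= 5.715473
# idf("22")         = 1 + ln(44218 / (3622 + 1)) ~= 3.5018296
w_systematic = term_score(2.0, 5.715473, 0.049684696, 0.03125)   # ~0.07172854
w_22         = term_score(2.0, 3.5018296, 0.049684696, 0.03125)  # ~0.026926363

# "22" sits in a nested clause scaled by coord(1/2); the outer sum is scaled
# by coord(2/3) because two of the three query clauses matched this document.
score = (w_systematic + 0.5 * w_22) * (2.0 / 3.0)
print(round(score, 9))   # ~0.056794487
```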
  2. Cheng, Y.-Y.; Xia, Y.: A systematic review of methods for aligning, mapping, merging taxonomies in information sciences (2023) 0.05
    0.05176563 = product of:
      0.15529688 = sum of:
        0.15529688 = weight(_text_:systematic in 1029) [ClassicSimilarity], result of:
          0.15529688 = score(doc=1029,freq=6.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.54687476 = fieldWeight in 1029, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1029)
      0.33333334 = coord(1/3)
    
    Abstract
    The purpose of this study is to provide a systematic literature review on taxonomy alignment methods in information science to explore the common research pipeline and characteristics. Design/methodology/approach: The authors implement a five-step systematic literature review process relating to taxonomy alignment. They take a knowledge organization system (KOS) perspective, specifically examining the KOS level of "taxonomies". Findings: They synthesize the matching dimensions of 28 taxonomy alignment studies in terms of taxonomy input, approach and output. In the input dimension, they develop three characteristics: tree shapes, variable names and symmetry; for approach: methodology, unit of matching, comparison type and relation type; for output: the number of merged solutions and whether the original taxonomies are preserved in the solutions. Research limitations/implications: The main research implications of this study are threefold: (1) to enhance the understanding of the characteristics of a taxonomy alignment work; (2) to provide a novel categorization of taxonomy alignment approaches into natural language processing, logic-based and heuristic-based approaches; (3) to provide a methodological guideline on the must-include characteristics for future taxonomy alignment research. Originality/value: There is no existing comprehensive review on the alignment of "taxonomies". Further, no other mapping survey research has discussed the comparison from a KOS perspective. Using a KOS lens is critical in understanding the broader picture of what other similar systems of organization are, and enables us to define taxonomies more precisely.
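To make the synthesis in result 2 concrete, the three matching dimensions (input, approach, output) can be encoded as a small record per reviewed study. The sketch below is illustrative only; the field names are chosen here and are not the authors' own coding scheme.

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class TaxonomyAlignmentStudy:
    """One reviewed study, characterised along input/approach/output dimensions.
    Field names are illustrative, not taken from the article."""
    # Input dimension
    tree_shape: str                  # e.g. "flat list", "deep hierarchy"
    variable_names: str              # how taxonomy labels are expressed
    symmetric_input: bool            # whether the two taxonomies play equal roles
    # Approach dimension
    methodology: Literal["nlp", "logic-based", "heuristic-based"]
    unit_of_matching: str            # e.g. "term", "node", "subtree"
    comparison_type: str             # e.g. "string", "structural", "semantic"
    relation_type: str               # e.g. "equivalence", "subsumption"
    # Output dimension
    merged_solutions: int            # number of merged solutions produced
    originals_preserved: bool        # whether the source taxonomies survive intact

example = TaxonomyAlignmentStudy(
    tree_shape="deep hierarchy", variable_names="preferred labels",
    symmetric_input=False, methodology="nlp", unit_of_matching="node",
    comparison_type="string", relation_type="equivalence",
    merged_solutions=1, originals_preserved=True,
)
```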
  3. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.03
    0.030688185 = product of:
      0.09206455 = sum of:
        0.09206455 = product of:
          0.27619365 = sum of:
            0.27619365 = weight(_text_:3a in 306) [ClassicSimilarity], result of:
              0.27619365 = score(doc=306,freq=2.0), product of:
                0.4212274 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.049684696 = queryNorm
                0.65568775 = fieldWeight in 306, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=306)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    Cf.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
  4. Lumsden, J.; Hall, H.; Cruickshank, P.: Ontology definition and construction, and epistemological adequacy for systems interoperability : a practitioner analysis (2011) 0.03
    0.029886894 = product of:
      0.08966068 = sum of:
        0.08966068 = weight(_text_:systematic in 4801) [ClassicSimilarity], result of:
          0.08966068 = score(doc=4801,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 4801, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4801)
      0.33333334 = coord(1/3)
    
    Abstract
    Ontology development is considered to be a useful approach to the design and implementation of interoperable systems. This literature review and commentary examines the current state of knowledge in this field with particular reference to processes involved in assuring epistemological adequacy. It takes the perspective of the information systems practitioner keen to adopt a systematic approach to in-house ontology design, taking into consideration previously published work. The study arises from author involvement in an integration/interoperability project on systems that support Scottish Common Housing Registers in which, ultimately, ontological modelling was not deployed. Issues concerning the agreement of meaning, and the implications for the creation of interoperable systems, are discussed. The extent to which those theories, methods and frameworks provide practitioners with a usable set of tools is explored, and examples of practical applications of ontological modelling are noted. The findings from the review of the literature demonstrate a number of difficulties faced by information systems practitioners keen to develop and deploy domain ontologies. A major problem is deciding which broad approach to take: to rely on automatic ontology construction techniques, or to rely on key words and domain experts to develop ontologies.
  5. Euzenat, J.; Meilicke, C.; Stuckenschmidt, H.; Shvaiko, P.; Trojahn, C.: Ontology alignment evaluation initiative : six years of experience (2011) 0.03
    0.029886894 = product of:
      0.08966068 = sum of:
        0.08966068 = weight(_text_:systematic in 161) [ClassicSimilarity], result of:
          0.08966068 = score(doc=161,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 161, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=161)
      0.33333334 = coord(1/3)
    
    Abstract
    In the area of semantic technologies, benchmarking and systematic evaluation are not yet as established as in other areas of computer science, e.g., information retrieval. In spite of successful attempts, more effort and experience are required in order to achieve such a level of maturity. In this paper, we report results and lessons learned from the Ontology Alignment Evaluation Initiative (OAEI), a benchmarking initiative for ontology matching. The goal of this work is twofold: on the one hand, we document the state of the art in evaluating ontology matching methods and provide potential participants of the initiative with a better understanding of the design and the underlying principles of the OAEI campaigns. On the other hand, we report experiences gained in this particular area of semantic technologies to potential developers of benchmarking for other kinds of systems. For this purpose, we describe the evaluation design used in the OAEI campaigns in terms of datasets, evaluation criteria and workflows, provide a global view on the results of the campaigns carried out from 2005 to 2010 and discuss upcoming trends, both specific to ontology matching and generally relevant for the evaluation of semantic technologies. Finally, we argue that there is a need for a further automation of benchmarking to shorten the feedback cycle for tool developers.
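OAEI-style evaluation typically scores the correspondences a matcher returns against a reference alignment using precision, recall and F-measure. A minimal sketch of that comparison, with each correspondence modelled as an (entity1, entity2, relation) tuple and invented toy data:

```python
def evaluate_alignment(found, reference):
    """Precision/recall/F1 of a set of correspondences against a reference alignment."""
    found, reference = set(found), set(reference)
    correct = found & reference
    precision = len(correct) / len(found) if found else 0.0
    recall = len(correct) / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Invented toy alignments between two ontologies o1 and o2.
reference = {("o1:Person", "o2:Human", "="), ("o1:Paper", "o2:Article", "=")}
found = {("o1:Person", "o2:Human", "="), ("o1:Paper", "o2:Review", "=")}
print(evaluate_alignment(found, reference))   # (0.5, 0.5, 0.5)
```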
  6. Peponakis, M.; Mastora, A.; Kapidakis, S.; Doerr, M.: Expressiveness and machine processability of Knowledge Organization Systems (KOS) : an analysis of concepts and relations (2020) 0.03
    0.029886894 = product of:
      0.08966068 = sum of:
        0.08966068 = weight(_text_:systematic in 5787) [ClassicSimilarity], result of:
          0.08966068 = score(doc=5787,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 5787, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5787)
      0.33333334 = coord(1/3)
    
    Abstract
    This study considers the expressiveness (that is the expressive power or expressivity) of different types of Knowledge Organization Systems (KOS) and discusses its potential to be machine-processable in the context of the Semantic Web. For this purpose, the theoretical foundations of KOS are reviewed based on conceptualizations introduced by the Functional Requirements for Subject Authority Data (FRSAD) and the Simple Knowledge Organization System (SKOS); natural language processing techniques are also implemented. Applying a comparative analysis, the dataset comprises a thesaurus (Eurovoc), a subject headings system (LCSH) and a classification scheme (DDC). These are compared with an ontology (CIDOC-CRM) by focusing on how they define and handle concepts and relations. It was observed that LCSH and DDC focus on the formalism of character strings (nomens) rather than on the modelling of semantics; their definition of what constitutes a concept is quite fuzzy, and they comprise a large number of complex concepts. By contrast, thesauri have a coherent definition of what constitutes a concept, and apply a systematic approach to the modelling of relations. Ontologies explicitly define diverse types of relations, and are by their nature machine-processable. The paper concludes that the potential of both the expressiveness and machine processability of each KOS is extensively regulated by its structural rules. It is harder to represent subject headings and classification schemes as semantic networks with nodes and arcs, while thesauri are more suitable for such a representation. In addition, a paradigm shift is revealed which focuses on the modelling of relations between concepts, rather than the concepts themselves.
  7. Boteram, F.; Hubrich, J.: Specifying intersystem relations : requirements, strategies, and issues (2010) 0.03
    0.029550051 = product of:
      0.08865015 = sum of:
        0.08865015 = sum of:
          0.048260607 = weight(_text_:indexing in 3691) [ClassicSimilarity], result of:
            0.048260607 = score(doc=3691,freq=2.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.2537542 = fieldWeight in 3691, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.046875 = fieldNorm(doc=3691)
          0.04038954 = weight(_text_:22 in 3691) [ClassicSimilarity], result of:
            0.04038954 = score(doc=3691,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.23214069 = fieldWeight in 3691, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3691)
      0.33333334 = coord(1/3)
    
    Abstract
    Ideally, intersystem relations complement highly expressive and thoroughly structured relational indexing languages. The relational structures of the participating systems contribute to the meaning of the individual terms or classes. When conceptualizing mapping relations, the structural and functional design of the respective systems must be fully taken into account. As intersystem relations may differ considerably from familiar interconcept relations, the creation of an adequate inventory that is general in coverage and specific in depth demands a deep understanding of the requirements and properties of mapping relations. The characteristics of specific mapping relations largely rely on the characteristics of the systems they are intended to connect. The detailed declaration of differences and peculiarities of specific mapping relations is an important prerequisite for modelling these relations. First approaches towards specifying
    Date
    22. 7.2010 17:11:51
  8. Ahn, J.-w.; Soergel, D.; Lin, X.; Zhang, M.: Mapping between ARTstor terms and the Getty Art and Architecture Thesaurus (2014) 0.03
    0.029550051 = product of:
      0.08865015 = sum of:
        0.08865015 = sum of:
          0.048260607 = weight(_text_:indexing in 1421) [ClassicSimilarity], result of:
            0.048260607 = score(doc=1421,freq=2.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.2537542 = fieldWeight in 1421, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.046875 = fieldNorm(doc=1421)
          0.04038954 = weight(_text_:22 in 1421) [ClassicSimilarity], result of:
            0.04038954 = score(doc=1421,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.23214069 = fieldWeight in 1421, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1421)
      0.33333334 = coord(1/3)
    
    Abstract
    To make better use of knowledge organization systems (KOS) for query expansion, we have developed a pattern-based technique for composition ontology mapping in a specific domain. The technique was tested in a two-step mapping. The user's free-text queries were first mapped to Getty's Art & Architecture Thesaurus (AAT) terms. The AAT-based queries were then mapped to a search engine's indexing vocabulary (ARTstor terms). The results indicated that our technique improved the mapping success rate from 40% to 70%. We also discuss how the technique may be applied to other KOS mappings and how it may be implemented in practical systems.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
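The two-step mapping described in result 8 (free-text query term → AAT concept → ARTstor indexing term) amounts to two chained lookups followed by query expansion. The sketch below uses invented sample tables and does not reproduce the authors' pattern-based matching itself.

```python
# Invented sample tables: free-text terms -> AAT concepts -> ARTstor indexing terms.
free_text_to_aat = {"church": ["aat:churches (buildings)"],
                    "painting": ["aat:paintings (visual works)"]}
aat_to_artstor = {"aat:churches (buildings)": ["Churches", "Religious buildings"],
                  "aat:paintings (visual works)": ["Paintings"]}

def expand_query(term: str) -> list[str]:
    """Map a free-text term through AAT to ARTstor indexing terms."""
    expanded = []
    for aat_concept in free_text_to_aat.get(term.lower(), []):
        expanded.extend(aat_to_artstor.get(aat_concept, []))
    return expanded

print(expand_query("Church"))   # ['Churches', 'Religious buildings']
```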
  9. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.02
    0.02192013 = product of:
      0.06576039 = sum of:
        0.06576039 = product of:
          0.19728117 = sum of:
            0.19728117 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
              0.19728117 = score(doc=1000,freq=2.0), product of:
                0.4212274 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.049684696 = queryNorm
                0.46834838 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the presentation at: https://www.google.com/url?sa=i&rct=j&q=&esrc=s&source=web&cd=&ved=0CAIQw7AJahcKEwjwoZzzytz_AhUAAAAAHQAAAAAQAg&url=https%3A%2F%2Fwiki.dnb.de%2Fdownload%2Fattachments%2F252121510%2FDA3%2520Workshop-Gabler.pdf%3Fversion%3D1%26modificationDate%3D1671093170000%26api%3Dv2&psig=AOvVaw0szwENK1or3HevgvIDOfjx&ust=1687719410889597&opi=89978449.
  10. Zhang, X.: Concept integration of document databases using different indexing languages (2006) 0.02
    0.019702308 = product of:
      0.059106924 = sum of:
        0.059106924 = product of:
          0.11821385 = sum of:
            0.11821385 = weight(_text_:indexing in 962) [ClassicSimilarity], result of:
              0.11821385 = score(doc=962,freq=12.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.6215682 = fieldWeight in 962, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=962)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    An integrated information retrieval system generally contains multiple databases that are inconsistent in terms of their content and indexing. This paper proposes a rough set-based transfer (RST) model for integration of the concepts of document databases using various indexing languages, so that users can search through the multiple databases using any of the current indexing languages. The RST model aims to effectively create meaningful transfer relations between the terms of two indexing languages, provided a number of documents are indexed with them in parallel. In our experiment, the indexing concepts of two databases respectively using the Thesaurus of Social Science (IZ) and the Schlagwortnormdatei (SWD) are integrated by means of the RST model. Finally, this paper compares the results achieved with a cross-concordance method, a conditional probability based method and the RST model.
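The RST model in result 10 is rough-set based, but the core idea, deriving transfer relations between the terms of two indexing languages from documents indexed with both in parallel, can be illustrated with plain co-occurrence statistics. A simplified sketch (a conditional-probability baseline, not the authors' rough-set formulation), with invented parallel indexing data:

```python
from collections import defaultdict

# Invented parallel indexing: each document carries terms from vocabulary A (IZ-like)
# and vocabulary B (SWD-like).
parallel_docs = [
    ({"Arbeitsmarkt"}, {"Arbeitsmarkt", "Beschäftigung"}),
    ({"Arbeitsmarkt", "Migration"}, {"Arbeitsmarkt"}),
    ({"Migration"}, {"Einwanderung"}),
]

def transfer_relations(docs, threshold=0.5):
    """Estimate P(b | a) from co-occurrence in parallel-indexed documents and
    keep pairs above a threshold as candidate transfer relations."""
    co = defaultdict(int)
    freq_a = defaultdict(int)
    for terms_a, terms_b in docs:
        for a in terms_a:
            freq_a[a] += 1
            for b in terms_b:
                co[(a, b)] += 1
    return {(a, b): n / freq_a[a] for (a, b), n in co.items() if n / freq_a[a] >= threshold}

print(transfer_relations(parallel_docs))
```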
  11. Hafner, R.; Schelling, B.: Automatisierung der Sacherschließung mit Semantic Web Technologie (2015) 0.02
    0.015707046 = product of:
      0.047121134 = sum of:
        0.047121134 = product of:
          0.09424227 = sum of:
            0.09424227 = weight(_text_:22 in 8365) [ClassicSimilarity], result of:
              0.09424227 = score(doc=8365,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.5416616 = fieldWeight in 8365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=8365)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 6.2015 16:08:38
  12. Bastos Vieira, S.; DeBrito, M.; Mustafa El Hadi, W.; Zumer, M.: Developing imaged KOS with the FRSAD Model : a conceptual methodology (2016) 0.01
    0.014187284 = product of:
      0.04256185 = sum of:
        0.04256185 = product of:
          0.0851237 = sum of:
            0.0851237 = weight(_text_:indexing in 3109) [ClassicSimilarity], result of:
              0.0851237 = score(doc=3109,freq=14.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.4475803 = fieldWeight in 3109, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3109)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This proposal presents the methodology of indexing with images suggested by De Brito and Caribé (2015). The imagetic model is used as a mechanism compatible with FRSAD for the global sharing and use of subject data, both within the library sector and beyond. The conceptual model of imagetic indexing shows how images are related to topics, and 'key-images' are interpreted as nomens to implement the FRSAD model. Indexing with images consists of using images instead of keywords or descriptors to represent and organize information. Implementing imaged navigation in OPACs brings multiple advantages derived from rethinking the OPAC anew, since we look forward to sharing concepts within the subject authority data. Images, carrying linguistic objects, permeate inter-social and cultural concepts. In practice this includes translated metadata, symmetrical multilingual thesauri, or any traditional indexing tools. iOPAC embodies efforts focused on conceptual levels as expected from librarians. Imaged interfaces are more intuitive since users do not need specific training for information retrieval, offering easier comprehension of indexing codes, larger conceptual portability of descriptors (as images), and better interoperability between discourse codes and indexing competences, positively affecting social and cultural interoperability. The imagetic methodology opens R&D fields for more suitable interfaces that take into consideration users with specific needs such as deafness and illiteracy. This methodology raises questions about the paradigms of the primacy of orality in information systems and paves the way to a legitimacy of multiple perspectives in document indexing by suggesting a more universal communication system based on images. Interdisciplinarity in neurosciences, linguistics and information sciences would provide desirable competencies for further investigations into the nature of cognitive processes in information organization and classification while developing assistive KOS for individuals with communication problems, such as autism and deafness.
  13. Stempfhuber, M.; Zapilko, B.: Modelling text-fact-integration in digital libraries (2009) 0.01
    0.013931636 = product of:
      0.041794907 = sum of:
        0.041794907 = product of:
          0.083589815 = sum of:
            0.083589815 = weight(_text_:indexing in 3393) [ClassicSimilarity], result of:
              0.083589815 = score(doc=3393,freq=6.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.4395151 = fieldWeight in 3393, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3393)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Digital Libraries currently face the challenge of integrating many different types of research information (e.g. publications, primary data, experts' profiles, institutional profiles, project information etc.) according to their scientific users' needs. To date, no general, integrated model for knowledge organization and retrieval in Digital Libraries exists. This causes the problem of structural and semantic heterogeneity due to the wide range of metadata standards, indexing vocabularies and indexing approaches used for different types of information. The research presented in this paper focuses on areas in which activities are being undertaken in the field of Digital Libraries in order to treat semantic interoperability problems. We present a model for the integrated retrieval of factual and textual data which combines multiple approaches to semantic interoperability and sets them into context. Embedded in the research cycle, traditional content indexing methods for publications meet the newer, but rarely used ontology-based approaches which seem to be better suited for representing complex information like that contained in survey data. The benefits of our model are (1) easy re-use of available knowledge organisation systems and (2) reduced efforts for domain modelling with ontologies.
  14. Dini, L.: CACAO : multilingual access to bibliographic records (2007) 0.01
    0.0134631805 = product of:
      0.04038954 = sum of:
        0.04038954 = product of:
          0.08077908 = sum of:
            0.08077908 = weight(_text_:22 in 126) [ClassicSimilarity], result of:
              0.08077908 = score(doc=126,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.46428138 = fieldWeight in 126, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=126)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Content
    Lecture given at the workshop: "Extending the multilingual capacity of The European Library in the EDL project, Stockholm, Swedish National Library, 22-23 November 2007".
  15. Boteram, F.; Hubrich, J.: Towards a comprehensive international Knowledge Organization System (2008) 0.01
    0.0134631805 = product of:
      0.04038954 = sum of:
        0.04038954 = product of:
          0.08077908 = sum of:
            0.08077908 = weight(_text_:22 in 4786) [ClassicSimilarity], result of:
              0.08077908 = score(doc=4786,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.46428138 = fieldWeight in 4786, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4786)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 9.2008 19:30:41
  16. Nilbe, S.: Semiautomatic merging of two universal thesauri : the case of Estonia (2011) 0.01
    0.0134057235 = product of:
      0.04021717 = sum of:
        0.04021717 = product of:
          0.08043434 = sum of:
            0.08043434 = weight(_text_:indexing in 3152) [ClassicSimilarity], result of:
              0.08043434 = score(doc=3152,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.42292362 = fieldWeight in 3152, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3152)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Subject access: preparing for the future. Proceedings of the IFLA satellite conference "Looking at the Past and Preparing for the Future", sponsored by the IFLA Classification and Indexing Section, August 20-21, 2009, Florence. Eds.: P. Landry et al.
  17. Ahmed, M.; Mukhopadhyay, M.; Mukhopadhyay, P.: Automated knowledge organization : AI ML based subject indexing system for libraries (2023) 0.01
    0.0134057235 = product of:
      0.04021717 = sum of:
        0.04021717 = product of:
          0.08043434 = sum of:
            0.08043434 = weight(_text_:indexing in 977) [ClassicSimilarity], result of:
              0.08043434 = score(doc=977,freq=8.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.42292362 = fieldWeight in 977, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=977)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The research study as reported here is an attempt to explore the possibilities of an AI/ML-based semi-automated indexing system in a library setup to handle large volumes of documents. It uses the Python virtual environment to install and configure an open source AI environment (named Annif) to feed the LOD (Linked Open Data) dataset of Library of Congress Subject Headings (LCSH) as a standard KOS (Knowledge Organisation System). The framework deployed the Turtle format of LCSH after cleaning the file with Skosify, applied an array of backend algorithms (namely TF-IDF, Omikuji, and NN-Ensemble) to measure relative performance, and selected Snowball as an analyser. The training of Annif was conducted with a large set of bibliographic records populated with subject descriptors (MARC tag 650$a) and indexed by trained LIS professionals. The training dataset is first treated with MarcEdit to export it in a format suitable for OpenRefine, and then in OpenRefine it undergoes many steps to produce a bibliographic record set suitable to train Annif. The framework, after training, has been tested with a bibliographic dataset to measure indexing efficiencies, and finally, the automated indexing framework is integrated with data wrangling software (OpenRefine) to produce suggested headings on a mass scale. The entire framework is based on open-source software, open datasets, and open standards.
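The TF-IDF backend mentioned in result 17 can be approximated outside Annif with a generic vectorizer: represent each subject heading by the text of records already indexed with it, then rank headings for a new document by cosine similarity. The sketch below uses scikit-learn and invented training data; it illustrates the idea only and is not Annif's actual implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented training data: concatenated text of records per subject heading.
corpus_per_heading = {
    "Ontology matching": "ontology alignment matching semantic heterogeneity correspondences",
    "Subject indexing":  "subject headings indexing cataloguing controlled vocabulary lcsh",
    "Machine learning":  "training models classification neural networks supervised learning",
}

headings = list(corpus_per_heading)
vectorizer = TfidfVectorizer(stop_words="english")
heading_matrix = vectorizer.fit_transform(corpus_per_heading.values())

def suggest(text: str, limit: int = 2):
    """Rank candidate subject headings for a new document by TF-IDF similarity."""
    sims = cosine_similarity(vectorizer.transform([text]), heading_matrix)[0]
    ranked = sorted(zip(headings, sims), key=lambda p: p[1], reverse=True)
    return ranked[:limit]

print(suggest("a system for automated subject indexing with controlled vocabularies"))
```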
  18. Hubrich, J.; Mengel, T.; Müller, K.; Jacobs, J.-H.: Improving subject access in global information spaces : reflections upon internationalization and localization of Knowledge Organization Systems (KOS) (2008) 0.01
    0.013270989 = product of:
      0.039812967 = sum of:
        0.039812967 = product of:
          0.079625934 = sum of:
            0.079625934 = weight(_text_:indexing in 2190) [ClassicSimilarity], result of:
              0.079625934 = score(doc=2190,freq=4.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.41867304 = fieldWeight in 2190, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2190)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    With the establishment of global information spaces that are characterized by heterogeneity, new kinds of knowledge organization systems (KOS) are needed to facilitate efficient subject access to available information resources. KOS need not be built bottom-up. Internationalization and localization of common KOS make it possible to use all the different kinds of existing subject indexing data for retrieval purposes and help create a user-friendly tool that supports cross-national query modification and hermeneutic processes of information seeking as well as precise topical queries.
    Source
    New perspectives on subject indexing and classification: essays in honour of Magda Heiner-Freiling. Ed.: K. Knull-Schlomann et al.
  19. Svensson, L.G.: Unified access : a semantic Web based model for multilingual navigation in heterogeneous data sources (2008) 0.01
    0.011375135 = product of:
      0.034125403 = sum of:
        0.034125403 = product of:
          0.068250805 = sum of:
            0.068250805 = weight(_text_:indexing in 2191) [ClassicSimilarity], result of:
              0.068250805 = score(doc=2191,freq=4.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.3588626 = fieldWeight in 2191, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2191)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Most online library catalogues are not well equipped for subject search. It is difficult to navigate the structures of the thesauri and classification systems used for indexing. Furthermore, there is little or no support for the integration of crosswalks between different controlled vocabularies, so that a subject search query formulated using one controlled vocabulary will not find resources indexed with another knowledge organisation system even if there exists a crosswalk between them. In this paper we will look at Semantic Web technologies and a prototype system leveraging those technologies in order to enhance the subject search possibilities in heterogeneously indexed repositories. Finally, we will have a brief look at different initiatives aimed at integrating library data into the Semantic Web.
    Source
    New perspectives on subject indexing and classification: essays in honour of Magda Heiner-Freiling. Ed.: K. Knull-Schlomann et al.
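The crosswalks result 19 calls for are nowadays commonly recorded as SKOS mapping relations (skos:exactMatch, skos:closeMatch) between concepts of different vocabularies. A minimal sketch with rdflib, using invented concept URIs, shows how one such mapping triple could be recorded and serialized:

```python
from rdflib import Graph, URIRef
from rdflib.namespace import SKOS

g = Graph()
g.bind("skos", SKOS)

# Invented concept URIs standing in for entries of two controlled vocabularies.
swd_concept = URIRef("http://example.org/swd/Informationskompetenz")
lcsh_concept = URIRef("http://example.org/lcsh/InformationLiteracy")

# Record the crosswalk as a SKOS mapping relation.
g.add((swd_concept, SKOS.exactMatch, lcsh_concept))

print(g.serialize(format="turtle"))
```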
  20. Landry, P.: The evolution of subject heading languages in Europe and their impact on subject access interoperability (2008) 0.01
    0.011375135 = product of:
      0.034125403 = sum of:
        0.034125403 = product of:
          0.068250805 = sum of:
            0.068250805 = weight(_text_:indexing in 2192) [ClassicSimilarity], result of:
              0.068250805 = score(doc=2192,freq=4.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.3588626 = fieldWeight in 2192, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2192)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Work on establishing interoperability between Subject Heading Languages (SHLs) in Europe is fairly recent, and much work is still needed before users can successfully conduct subject searches across information resources in European libraries. Over the last 25 years many subject heading lists were created or developed from existing ones. Obstacles to effective interoperability have been progressively removed, which has paved the way for interoperability projects to achieve some encouraging results. This paper will look at interoperability approaches in the area of subject indexing tools and will present a short overview of the development of European SHLs. It will then look at the conditions necessary for effective and comprehensive interoperability using the method of linking subject headings, as used by the »Multilingual Access to Subject Headings project« (MACS).
    Source
    New perspectives on subject indexing and classification: essays in honour of Magda Heiner-Freiling. Ed.: K. Knull-Schlomann et al.

Languages

  • e 65
  • d 12

Types

  • a 57
  • el 20
  • m 4
  • s 2
  • x 2
  • p 1