Search (15 results, page 1 of 1)

  • year_i:[2010 TO 2020}
  • author_ss:"Soergel, D."
  1. Soergel, D.: Knowledge organization for learning (2014) 0.03
    0.03423535 = product of:
      0.0684707 = sum of:
        0.0684707 = sum of:
          0.00669738 = weight(_text_:a in 1400) [ClassicSimilarity], result of:
            0.00669738 = score(doc=1400,freq=4.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.12611452 = fieldWeight in 1400, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1400)
          0.061773323 = weight(_text_:22 in 1400) [ClassicSimilarity], result of:
            0.061773323 = score(doc=1400,freq=4.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.38301262 = fieldWeight in 1400, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1400)
      0.5 = coord(1/2)
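For reference, the explain tree above can be reproduced outside Solr. Lucene's ClassicSimilarity uses tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); the constants below are copied directly from the listing, and the final 0.5 is the coord(1/2) factor shown there. A minimal sketch:

```python
import math

def classic_score(freq, doc_freq, max_docs, field_norm, query_norm):
    """One weight(...) leaf of a Lucene ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                             # e.g. 2.0 = tf(freq=4.0)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # e.g. 1.153047
    query_weight = idf * query_norm                  # e.g. 0.053105544
    field_weight = tf * idf * field_norm             # e.g. 0.12611452
    return query_weight * field_weight

QUERY_NORM = 0.046056706                             # from the listing
w_a  = classic_score(freq=4.0, doc_freq=37942, max_docs=44218,
                     field_norm=0.0546875, query_norm=QUERY_NORM)
w_22 = classic_score(freq=4.0, doc_freq=3622, max_docs=44218,
                     field_norm=0.0546875, query_norm=QUERY_NORM)
total = 0.5 * (w_a + w_22)                           # 0.5 = coord(1/2)
# total matches the listed document score of about 0.03423535
```

The same helper reproduces every leaf in the explain trees below; only freq, docFreq, and fieldNorm vary per document and term.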
    
    Abstract
    This paper discusses and illustrates through examples how meaningful or deep learning can be supported through well-structured presentation of material, through giving learners schemas they can use to organize knowledge in their minds, and through helping learners to understand knowledge organization principles they can use to construct their own schemas. It is a call to all authors, educators and information designers to pay attention to meaningful presentation that expresses the internal structure of the domain and facilitates the learner's assimilation of concepts and their relationships.
    Pages
    pp. 22-32
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
    Type
    a
  2. Berti, Jr., D.W.; Lima, G.; Maculan, B.; Soergel, D.: Computer-assisted checking of conceptual relationships in a large thesaurus (2018) 0.03
    0.028787265 = product of:
      0.05757453 = sum of:
        0.05757453 = sum of:
          0.007654148 = weight(_text_:a in 4721) [ClassicSimilarity], result of:
            0.007654148 = score(doc=4721,freq=4.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.14413087 = fieldWeight in 4721, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0625 = fieldNorm(doc=4721)
          0.04992038 = weight(_text_:22 in 4721) [ClassicSimilarity], result of:
            0.04992038 = score(doc=4721,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.30952093 = fieldWeight in 4721, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=4721)
      0.5 = coord(1/2)
    
    Date
    17. 1.2019 19:04:22
    Type
    a
  3. Ahn, J.-w.; Soergel, D.; Lin, X.; Zhang, M.: Mapping between ARTstor terms and the Getty Art and Architecture Thesaurus (2014) 0.02
    0.023258494 = product of:
      0.04651699 = sum of:
        0.04651699 = sum of:
          0.009076704 = weight(_text_:a in 1421) [ClassicSimilarity], result of:
            0.009076704 = score(doc=1421,freq=10.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.1709182 = fieldWeight in 1421, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=1421)
          0.037440285 = weight(_text_:22 in 1421) [ClassicSimilarity], result of:
            0.037440285 = score(doc=1421,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.23214069 = fieldWeight in 1421, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1421)
      0.5 = coord(1/2)
    
    Abstract
    To make better use of knowledge organization systems (KOS) for query expansion, we have developed a pattern-based technique for composition ontology mapping in a specific domain. The technique was tested in a two-step mapping. The user's free-text queries were first mapped to Getty's Art & Architecture Thesaurus (AAT) terms. The AAT-based queries were then mapped to a search engine's indexing vocabulary (ARTstor terms). The results indicated that our technique improved the mapping success rate from 40% to 70%. We also discuss how the technique may be applied to other KOS mappings and how it may be implemented in practical systems.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
    Type
    a
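The two-step mapping described in entry 3's abstract can be sketched as a pair of lookups. The table entries below are invented for illustration and are not taken from the AAT or ARTstor:

```python
# Hypothetical two-step vocabulary mapping in the spirit of entry 3:
# free-text query terms -> AAT concepts -> search-engine indexing terms.
FREETEXT_TO_AAT = {
    "church window": "stained glass (material)",
    "old maps": "cartographic materials",
}
AAT_TO_ARTSTOR = {
    "stained glass (material)": "Stained Glass",
    "cartographic materials": "Maps",
}

def map_query(term):
    """Map a free-text term through the thesaurus to an indexing term."""
    aat = FREETEXT_TO_AAT.get(term.lower())
    return AAT_TO_ARTSTOR.get(aat) if aat else None
```

The intermediate thesaurus step is what the paper evaluates: mapping through the KOS rather than directly from user vocabulary to indexing vocabulary.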
  4. Zhang, P.; Soergel, D.: Towards a comprehensive model of the cognitive process and mechanisms of individual sensemaking (2014) 0.02
    0.020074995 = product of:
      0.04014999 = sum of:
        0.04014999 = sum of:
          0.00894975 = weight(_text_:a in 1344) [ClassicSimilarity], result of:
            0.00894975 = score(doc=1344,freq=14.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.1685276 = fieldWeight in 1344, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1344)
          0.03120024 = weight(_text_:22 in 1344) [ClassicSimilarity], result of:
            0.03120024 = score(doc=1344,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.19345059 = fieldWeight in 1344, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1344)
      0.5 = coord(1/2)
    
    Abstract
    This review introduces a comprehensive model of the cognitive process and mechanisms of individual sensemaking to provide a theoretical basis for:
    - empirical studies that improve our understanding of the cognitive process and mechanisms of sensemaking, and integration of the results of such studies;
    - education in critical thinking and sensemaking skills;
    - the design of sensemaking assistant tools that support and guide users.
    The paper reviews and extends existing sensemaking models with ideas from learning and cognition. It reviews the literature on sensemaking models in human-computer interaction (HCI), cognitive systems engineering, organizational communication, library and information science (LIS), learning theories, cognitive psychology, and task-based information seeking. The model resulting from this synthesis provides a stronger basis for explaining sensemaking behaviors and conceptual changes. It illustrates the iterative processes of sensemaking and extends existing models that focus on activities by integrating cognitive mechanisms, the creation of instantiated structure elements of knowledge, and different types of conceptual change, to show a complete picture of the cognitive processes of sensemaking. The processes and cognitive mechanisms identified provide better foundations for knowledge creation, organization, and sharing practices, and a stronger basis for the design of sensemaking assistant systems and tools.
    Date
    22. 8.2014 16:55:39
    Type
    a
  5. Soergel, D.: Unleashing the power of data through organization : structure and connections for meaning, learning and discovery (2015) 0.02
    0.018982807 = product of:
      0.037965614 = sum of:
        0.037965614 = sum of:
          0.006765375 = weight(_text_:a in 2376) [ClassicSimilarity], result of:
            0.006765375 = score(doc=2376,freq=8.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.12739488 = fieldWeight in 2376, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2376)
          0.03120024 = weight(_text_:22 in 2376) [ClassicSimilarity], result of:
            0.03120024 = score(doc=2376,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.19345059 = fieldWeight in 2376, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2376)
      0.5 = coord(1/2)
    
    Abstract
    Knowledge organization is needed everywhere. Its importance is marked by its pervasiveness. This paper will show many areas, tasks, and functions where proper use of knowledge organization, construed as broadly as the term implies, provides support for learning and understanding, for sense making and meaning making, for inference, and for discovery by people and computer programs and thereby will make the world a better place. The paper focuses not on metadata but rather on structuring and representing the actual data or knowledge itself and argues for more communication between the largely separated KO, ontology, data modeling, and semantic web communities to address the many problems that need better solutions. In particular, the paper discusses the application of knowledge organization in knowledge bases for question answering and cognitive systems, knowledge bases for information extraction from text or multimedia, linked data, big data and data analytics, electronic health records as one example, influence diagrams (causal maps), dynamic system models, process diagrams, concept maps, and other node-link diagrams, information systems in organizations, knowledge organization for understanding and learning, and knowledge transfer between domains. The paper argues for moving beyond triples to a more powerful representation using entities and multi-way relationships but not attributes.
    Content
    Selected papers from "Knowledge Organization, Making a Difference": ISKO-UK Biennial Conference, 13th-14th July 2015, London. Cf.: http://www.ergon-verlag.de/isko_ko/downloads/ko_42_2015_6.
    Date
    27.11.2015 20:52:22
    Type
    a
  6. Soergel, D.; Helfer, O.: A metrics ontology : an intellectual infrastructure for defining, managing, and applying metrics (2016) 0.00
    0.00270615 = product of:
      0.0054123 = sum of:
        0.0054123 = product of:
          0.0108246 = sum of:
            0.0108246 = weight(_text_:a in 4927) [ClassicSimilarity], result of:
              0.0108246 = score(doc=4927,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.20383182 = fieldWeight in 4927, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4927)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Knowledge organization for a sustainable world: challenges and perspectives for cultural, scientific, and technological sharing in a connected society : proceedings of the Fourteenth International ISKO Conference 27-29 September 2016, Rio de Janeiro, Brazil / organized by International Society for Knowledge Organization (ISKO), ISKO-Brazil, São Paulo State University ; edited by José Augusto Chaves Guimarães, Suellen Oliveira Milani, Vera Dodebei
    Type
    a
  7. Huang, X.; Soergel, D.: Relevance: an improved framework for explicating the notion (2013) 0.00
    0.0025370158 = product of:
      0.0050740317 = sum of:
        0.0050740317 = product of:
          0.010148063 = sum of:
            0.010148063 = weight(_text_:a in 527) [ClassicSimilarity], result of:
              0.010148063 = score(doc=527,freq=18.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.19109234 = fieldWeight in 527, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=527)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Synthesizing and building on many ideas from the literature, this article presents an improved conceptual framework that clarifies the notion of relevance with its many elements, variables, criteria, and situational factors. Relevance is defined as a Relationship (R) between an Information Object (I) and an Information Need (N) (which consists of Topic, User, Problem/Task, and Situation/Context) with focus on R. This defines Relevance-as-is (conceptual relevance, strong relevance). To determine relevance, an Agent A (a person or system) operates on a representation I′ of the information object and a representation N′ of the information need, resulting in relevance-as-determined (operational measure of relevance, weak relevance, an approximation). Retrieval tests compare relevance-as-determined by different agents. This article discusses and compares two major approaches to conceptualizing relevance: the entity-focused approach (focus on elaborating the entities involved in relevance) and the relationship-focused approach (focus on explicating the relational nature of relevance). The article argues that because relevance is fundamentally a relational construct, the relationship-focused approach deserves a higher priority and more attention than it has received. The article further elaborates on the elements of the framework with a focus on clarifying several critical issues in the discourse on relevance.
    Type
    a
  8. Soergel, D.: Towards a relation ontology for the Semantic Web (2011) 0.00
    0.0024857575 = product of:
      0.004971515 = sum of:
        0.004971515 = product of:
          0.00994303 = sum of:
            0.00994303 = weight(_text_:a in 4342) [ClassicSimilarity], result of:
              0.00994303 = score(doc=4342,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.18723148 = fieldWeight in 4342, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4342)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Semantic Web consists of data structured for use by computer programs, such as data sets made available under the Linked Open Data initiative. Much of this data is structured following the entity-relationship model encoded in RDF for syntactic interoperability. For semantic interoperability, the semantics of the relationships used in any given dataset needs to be made explicit. Ultimately this requires an inventory of these relationships structured around a relation ontology. This talk will outline a blueprint for such an inventory, including a format for the description/definition of binary and n-ary relations, drawing on ideas put forth in the classification and thesaurus community over the last 60 years, upper level ontologies, systems like FrameNet, the Buffalo Relation Ontology, and an analysis of linked data sets.
    Source
    Classification and ontology: formal approaches and access to knowledge: proceedings of the International UDC Seminar, 19-20 September 2011, The Hague, The Netherlands. Eds.: A. Slavic u. E. Civallero
    Type
    a
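Entry 8's argument for moving beyond binary triples to multi-way relationships can be illustrated with a small sketch. The relation type and role names below are hypothetical, not taken from any published relation ontology:

```python
from dataclasses import dataclass, field

# A binary, triple-style statement can relate only two entities:
triple = ("Aspirin", "treats", "Headache")

# A multi-way (n-ary) relationship reifies the statement so it can carry
# any number of participants, each identified by a role name.
@dataclass
class Relation:
    relation_type: str
    participants: dict = field(default_factory=dict)

treatment = Relation("treatment", {
    "agent": "Aspirin",
    "condition": "Headache",
    "dosage": "500 mg",           # has no natural home in a single triple
    "evidence": "a trial report", # hypothetical participant
})
```

The dosage and evidence participants are exactly the kind of information that must otherwise be scattered over several triples via a blank node.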
  9. Golub, K.; Hansson, J.; Soergel, D.; Tudhope, D.: Managing classification in libraries : a methodological outline for evaluating automatic subject indexing and classification in Swedish library catalogues (2015) 0.00
    0.0023919214 = product of:
      0.0047838427 = sum of:
        0.0047838427 = product of:
          0.009567685 = sum of:
            0.009567685 = weight(_text_:a in 2300) [ClassicSimilarity], result of:
              0.009567685 = score(doc=2300,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.18016359 = fieldWeight in 2300, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2300)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Subject terms play a crucial role in resource discovery but require substantial effort to produce. Automatic subject classification and indexing address problems of scale and sustainability and can be used to enrich existing bibliographic records, establish more connections across and between resources, and enhance the consistency of bibliographic data. The paper aims to put forward a comprehensive methodological framework to evaluate automatic classification tools for Swedish textual documents based on the Dewey Decimal Classification (DDC), recently introduced to Swedish libraries. Three major complementary approaches are suggested: a quality-built gold standard, retrieval effects, and domain analysis. The gold standard is built based on input from at least two catalogue librarians, end-users expert in the subject, end-users inexperienced in the subject, and automated tools. Retrieval effects are studied through a combination of assigned and free tasks, including factual and comprehensive types. The study also takes into consideration the different role and character of subject terms in various knowledge domains, such as scientific disciplines. As a theoretical framework, domain analysis is used and applied in relation to the implementation of DDC in Swedish libraries and chosen domains of knowledge within the DDC itself.
    Source
    Classification and authority control: expanding resource discovery: proceedings of the International UDC Seminar 2015, 29-30 October 2015, Lisbon, Portugal. Eds.: Slavic, A. u. M.I. Cordeiro
    Type
    a
  10. Deng, J.; Soergel, D.: Concept maps to support paper topic exploration and student-advisor communication (2016) 0.00
    0.0023435948 = product of:
      0.0046871896 = sum of:
        0.0046871896 = product of:
          0.009374379 = sum of:
            0.009374379 = weight(_text_:a in 4920) [ClassicSimilarity], result of:
              0.009374379 = score(doc=4920,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.17652355 = fieldWeight in 4920, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4920)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Knowledge organization for a sustainable world: challenges and perspectives for cultural, scientific, and technological sharing in a connected society : proceedings of the Fourteenth International ISKO Conference 27-29 September 2016, Rio de Janeiro, Brazil / organized by International Society for Knowledge Organization (ISKO), ISKO-Brazil, São Paulo State University ; edited by José Augusto Chaves Guimarães, Suellen Oliveira Milani, Vera Dodebei
    Type
    a
  11. Golub, K.; Soergel, D.; Buchanan, G.; Tudhope, D.; Lykke, M.; Hiom, D.: A framework for evaluating automatic indexing or classification in the context of retrieval (2016) 0.00
    0.0022374375 = product of:
      0.004474875 = sum of:
        0.004474875 = product of:
          0.00894975 = sum of:
            0.00894975 = weight(_text_:a in 3311) [ClassicSimilarity], result of:
              0.00894975 = score(doc=3311,freq=14.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1685276 = fieldWeight in 3311, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3311)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Tools for automatic subject assignment help deal with scale and sustainability in creating and enriching metadata, establishing more connections across and between resources and enhancing consistency. Although some software vendors and experimental researchers claim the tools can replace manual subject indexing, hard scientific evidence of their performance in operating information environments is scarce. A major reason for this is that research is usually conducted in laboratory conditions, excluding the complexities of real-life systems and situations. The article reviews and discusses issues with existing evaluation approaches such as problems of aboutness and relevance assessments, implying the need to use more than a single "gold standard" method when evaluating indexing and retrieval, and proposes a comprehensive evaluation framework. The framework is informed by a systematic review of the literature on evaluation approaches: evaluating indexing quality directly through assessment by an evaluator or through comparison with a gold standard, evaluating the quality of computer-assisted indexing directly in the context of an indexing workflow, and evaluating indexing quality indirectly through analyzing retrieval performance.
    Type
    a
  12. Huang, X.; Soergel, D.; Klavans, J.L.: Modeling and analyzing the topicality of art images (2015) 0.00
    0.0018909799 = product of:
      0.0037819599 = sum of:
        0.0037819599 = product of:
          0.0075639198 = sum of:
            0.0075639198 = weight(_text_:a in 2127) [ClassicSimilarity], result of:
              0.0075639198 = score(doc=2127,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.14243183 = fieldWeight in 2127, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2127)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This study demonstrates an improved conceptual foundation to support well-structured analysis of image topicality. First we present a conceptual framework for analyzing image topicality, explicating the layers, the perspectives, and the topical relevance relationships involved in modeling the topicality of art images. We adapt a generic relevance typology to image analysis by extending it with definitions and relationships specific to the visual art domain and integrating it with schemes of image-text relationships that are important for image subject indexing. We then apply the adapted typology to analyze the topical relevance relationships between 11 art images and 768 image tags assigned by art historians and librarians. The original contribution of our work is the topical structure analysis of image tags that allows the viewer to more easily grasp the content, context, and meaning of an image and quickly tune into aspects of interest; it could also guide both the indexer and the searcher to specify image tags/descriptors in a more systematic and precise manner and thus improve the match between the two parties. An additional contribution is systematically examining and integrating the variety of image-text relationships from a relevance perspective. The paper concludes with implications for relational indexing and social tagging.
    Type
    a
  13. Soergel, D.; Popescu, D.: Organization authority database design with classification principles (2015) 0.00
    0.0018909799 = product of:
      0.0037819599 = sum of:
        0.0037819599 = product of:
          0.0075639198 = sum of:
            0.0075639198 = weight(_text_:a in 2293) [ClassicSimilarity], result of:
              0.0075639198 = score(doc=2293,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.14243183 = fieldWeight in 2293, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2293)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We illustrate the principle of unified treatment of all authority data for any kind of entities, subjects/topics, places, events, persons, organizations, etc. through the design and implementation of an enriched authority database for organizations, maintained as an integral part of an authority database that also includes subject authority control / classification data, using the same structures for data and common modules for processing and display of data. Organization-related data are stored in information systems of many companies. We specifically examine the case of the World Bank Group (WBG) according to organization role: suppliers, partners, customers, competitors, authors, publishers, or subjects of documents, loan recipients, suppliers for WBG-funded projects and subunits of the organization itself. A central organization authority where each organization is identified by a URI, represented by several names and linked to other organizations through hierarchical and other relationships serves to link data from these disparate information systems. Designing the conceptual structure of a unified authority database requires integrating SKOS, the W3C Organization Ontology and other schemes into one comprehensive ontology. To populate the authority database with organizations, we import data from external sources (e.g., DBpedia and Library of Congress authorities) and internal sources (e.g., the lists of organizations from multiple WBG information systems).
    Source
    Classification and authority control: expanding resource discovery: proceedings of the International UDC Seminar 2015, 29-30 October 2015, Lisbon, Portugal. Eds.: Slavic, A. u. M.I. Cordeiro
    Type
    a
  14. Soergel, D.: Conceptual foundations for semantic mapping and semantic search (2011) 0.00
    0.001757696 = product of:
      0.003515392 = sum of:
        0.003515392 = product of:
          0.007030784 = sum of:
            0.007030784 = weight(_text_:a in 3939) [ClassicSimilarity], result of:
              0.007030784 = score(doc=3939,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.13239266 = fieldWeight in 3939, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3939)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article proposes an approach to mapping between Knowledge Organization Systems (KOS), including ontologies, classifications, taxonomies, and thesauri and even natural languages, that is based on deep semantics. In this approach, concepts in each KOS are expressed through canonical expressions, such as description logic formulas, that combine atomic (or elemental) concepts drawn from a core classification. Relationships between concepts within or across KOS can then be derived by reasoning over the canonical expressions. The canonical expressions can also be used to provide a facet-based query formulation front-end for free-text search. The article illustrates this approach through many examples. It presents methods for the efficient construction of canonical expressions (linguistic analysis, exploiting information in the KOS and their hierarchies, and crowdsourcing) that make this approach feasible.
    Type
    a
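The canonical-expression approach of entry 14 can be sketched: each KOS concept is expressed as a combination of atomic concepts, and relationships between concepts are then derived by reasoning over those expressions (here reduced to proper-subset testing). All concept and atom names below are invented for illustration:

```python
# Each concept's canonical expression, modeled as a set of atomic concepts
# drawn from a core classification (entry 14 uses richer description-logic
# formulas; sets are the simplest special case).
CANONICAL = {
    "controlled vocabulary": frozenset({"KOS", "term-based"}),
    "thesaurus":             frozenset({"KOS", "term-based", "relationships"}),
    "classification":        frozenset({"KOS", "notation-based"}),
}

def broader_than(a, b):
    """a is broader than b if a's atoms are a proper subset of b's."""
    return CANONICAL[a] < CANONICAL[b]
```

Because the mapping is derived from the expressions rather than asserted pairwise, adding one new concept with its canonical expression yields its relationships to every existing concept for free.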
  15. Balakrishnan, U.; Voß, J.; Soergel, D.: Towards integrated systems for KOS management, mapping, and access : Coli-conc and its collaborative computer-assisted KOS mapping tool Cocoda (2018) 0.00
    0.001353075 = product of:
      0.00270615 = sum of:
        0.00270615 = product of:
          0.0054123 = sum of:
            0.0054123 = weight(_text_:a in 4825) [ClassicSimilarity], result of:
              0.0054123 = score(doc=4825,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.10191591 = fieldWeight in 4825, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4825)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a