Search (94 results, page 1 of 5)

  • theme_ss:"Semantische Interoperabilität"
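
  Each hit below is listed with the relevance score assigned by the underlying Lucene engine (ClassicSimilarity, i.e. TF-IDF weighting). As a rough sketch of how such a score is assembled from the per-term statistics, a document d is scored against a query q as

    \mathrm{score}(q,d) \;=\; \mathrm{coord}(q,d) \cdot \mathrm{queryNorm}(q) \cdot \sum_{t \in q} \mathrm{tf}(t,d) \cdot \mathrm{idf}(t)^{2} \cdot \mathrm{fieldNorm}(t,d)

  with \mathrm{tf}(t,d) = \sqrt{\mathrm{freq}(t,d)} and \mathrm{idf}(t) = 1 + \ln\frac{\mathrm{maxDocs}}{\mathrm{docFreq}(t)+1}; the coord factor rewards documents that match more of the query's terms, and fieldNorm discounts matches in longer fields.
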
  1. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.19
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. Cf. also the presentation at: https://www.google.com/url?sa=i&rct=j&q=&esrc=s&source=web&cd=&ved=0CAIQw7AJahcKEwjwoZzzytz_AhUAAAAAHQAAAAAQAg&url=https%3A%2F%2Fwiki.dnb.de%2Fdownload%2Fattachments%2F252121510%2FDA3%2520Workshop-Gabler.pdf%3Fversion%3D1%26modificationDate%3D1671093170000%26api%3Dv2&psig=AOvVaw0szwENK1or3HevgvIDOfjx&ust=1687719410889597&opi=89978449.
  2. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.18
    Content
    Cf.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
  3. Landry, P.: MACS: multilingual access to subject and link management : Extending the Multilingual Capacity of TEL in the EDL Project (2007) 0.05
    Content
    Lecture given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  4. Metadata and semantics research : 9th Research Conference, MTSR 2015, Manchester, UK, September 9-11, 2015, Proceedings (2015) 0.04
    LCSH
    Computer science
    Database management
    Text processing (Computer science)
    Series
    Communications in computer and information science; 544
    Subject
    Computer science
    Database management
    Text processing (Computer science)
  5. Reasoning Web : Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures (2017) 0.03
    Abstract
    This volume contains the lecture notes of the 13th Reasoning Web Summer School, RW 2017, held in London, UK, in July 2017. In 2017, the theme of the school was "Semantic Interoperability on the Web", which encompasses subjects such as data integration, open data management, reasoning over linked data, database to ontology mapping, query answering over ontologies, hybrid reasoning with rules and ontologies, and ontology-based dynamic systems. The papers of this volume focus on these topics and also address foundational reasoning techniques used in answer set programming and ontologies.
    LCSH
    Computer science
    Database management
    Computer Science
    Subject
    Computer science
    Database management
    Computer Science
  6. Metadata and semantics research : 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014, Proceedings (2014) 0.03
    Abstract
    This book constitutes the refereed proceedings of the 8th Metadata and Semantics Research Conference, MTSR 2014, held in Karlsruhe, Germany, in November 2014. The 23 full papers and 9 short papers presented were carefully reviewed and selected from 57 submissions. The papers are organized in several sessions and tracks. They cover the following topics: metadata and linked data: tools and models; (meta) data quality assessment and curation; semantic interoperability, ontology-based data access and representation; big data and digital libraries in health, science and technology; metadata and semantics for open repositories, research information systems and data infrastructure; metadata and semantics for cultural collections and applications; semantics for agriculture, food and environment.
    Content
    Metadata and linked data.- Tools and models.- (Meta)data quality assessment and curation.- Semantic interoperability, ontology-based data access and representation.- Big data and digital libraries in health, science and technology.- Metadata and semantics for open repositories, research information systems and data infrastructure.- Metadata and semantics for cultural collections and applications.- Semantics for agriculture, food and environment.
    LCSH
    Computer science
    Database management
    Text processing (Computer science)
    Series
    Communications in computer and information science; 478
    Subject
    Computer science
    Database management
    Text processing (Computer science)
  7. Petras, V.: Heterogenitätsbehandlung und Terminology Mapping durch Crosskonkordanzen : eine Fallstudie (2010) 0.03
    Abstract
    Until the end of 2007, the BMBF funded a project whose task was to organize the creation and management of cross-concordances between controlled vocabularies (thesauri, classifications, descriptor lists). In three years, 64 cross-concordances with more than 500,000 relations between controlled vocabularies from the social sciences and other subject fields were implemented. In the final phase of the project, an extensive evaluation was carried out to test the effectiveness of the cross-concordances in different information systems. The article reports on the possible applications of heterogeneity treatment by means of cross-concordances and on the results of the extensive analyses.
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P.Ohly
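
    To make the cross-concordance idea described above concrete: such a concordance is essentially a set of typed relations (exact, broader, narrower, related) between the terms of two controlled vocabularies, which a retrieval system can use to translate a query from one vocabulary into the other. A minimal Python sketch with invented terms and relation types (not the project's actual data or software):

      # Minimal sketch of a cross-concordance between two controlled vocabularies.
      # Terms and relation types are invented examples, not the project's data.
      CROSS_CONCORDANCE = {
          # source-vocabulary term -> list of (target term, relation type)
          "Arbeitslosigkeit": [("unemployment", "exact")],
          "Bildungspolitik": [("education policy", "exact"), ("education", "broader")],
          "Jugend": [("youth", "exact"), ("adolescence", "related")],
      }

      def translate_term(term: str) -> list[str]:
          """Map a source-vocabulary term to target-vocabulary terms,
          preferring exact matches over broader/related ones."""
          mappings = CROSS_CONCORDANCE.get(term, [])
          exact = [t for t, rel in mappings if rel == "exact"]
          return exact if exact else [t for t, _ in mappings]

      print(translate_term("Bildungspolitik"))  # ['education policy']
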
  8. Sakr, S.; Wylot, M.; Mutharaju, R.; Le-Phuoc, D.; Fundulaki, I.: Linked data : storing, querying, and reasoning (2018) 0.03
    Abstract
    This book describes efficient and effective techniques for harnessing the power of Linked Data by tackling the various aspects of managing its growing volume: storing, querying, reasoning, provenance management and benchmarking. To this end, Chapter 1 introduces the main concepts of the Semantic Web and Linked Data and provides a roadmap for the book. Next, Chapter 2 briefly presents the basic concepts underpinning Linked Data technologies that are discussed in the book. Chapter 3 then offers an overview of various techniques and systems for centrally querying RDF datasets, and Chapter 4 outlines various techniques and systems for efficiently querying large RDF datasets in distributed environments. Subsequently, Chapter 5 explores how streaming requirements are addressed in current, state-of-the-art RDF stream data processing. Chapter 6 covers performance and scaling issues of distributed RDF reasoning systems, while Chapter 7 details benchmarks for RDF query engines and instance matching systems. Chapter 8 addresses the provenance management for Linked Data and presents the different provenance models developed. Lastly, Chapter 9 offers a brief summary, highlighting and providing insights into some of the open challenges and research directions. Providing an updated overview of methods, technologies and systems related to Linked Data this book is mainly intended for students and researchers who are interested in the Linked Data domain. It enables students to gain an understanding of the foundations and underpinning technologies and standards for Linked Data, while researchers benefit from the in-depth coverage of the emerging and ongoing advances in Linked Data storing, querying, reasoning, and provenance management systems. Further, it serves as a starting point to tackle the next research challenges in the domain of Linked Data management.
    LCSH
    Computer science
    Subject
    Computer science
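
    As a small illustration of the storing-and-querying layer the book surveys, the sketch below loads a few RDF triples and runs a SPARQL query with the rdflib library; the triples are invented example data, not material from the book:

      # Sketch: load a tiny RDF graph and query it with SPARQL (rdflib).
      # The triples are invented illustration data.
      from rdflib import Graph

      TTL = """
      @prefix ex:  <http://example.org/> .
      @prefix dct: <http://purl.org/dc/terms/> .

      ex:doc1 dct:title   "Linked data: storing, querying, and reasoning" ;
              dct:subject ex:LinkedData .
      """

      g = Graph()
      g.parse(data=TTL, format="turtle")

      rows = g.query("""
          PREFIX dct: <http://purl.org/dc/terms/>
          SELECT ?title WHERE { ?doc dct:title ?title . }
      """)
      for row in rows:
          print(row.title)
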
  9. Celli, F. et al.: Enabling multilingual search through controlled vocabularies : the AGRIS approach (2016) 0.03
    Series
    Communications in computer and information science; 672
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  10. Ledl, A.: Demonstration of the BAsel Register of Thesauri, Ontologies & Classifications (BARTOC) (2015) 0.02
    Abstract
    The BAsel Register of Thesauri, Ontologies & Classifications (BARTOC, http://bartoc.org) is a bibliographic database aiming to record metadata of as many Knowledge Organization Systems as possible. It has a faceted, responsive web design search interface in 20 EU languages. With more than 1'300 interdisciplinary items in 77 languages, BARTOC is the largest database of its kind, multilingual both by content and features, and it is still growing. This being said, the demonstration of BARTOC would be suitable for topic no. 10 [Multilingual and Interdisciplinary KOS applications and tools]. BARTOC has been developed by the University Library of Basel, Switzerland. It is rooted in the tradition of library and information science of collecting bibliographic records of controlled and structured vocabularies, yet in a more contemporary manner. BARTOC is based on the open source content management system Drupal 7.
  11. Zeng, M.L.; Chan, L.M.: Trends and issues in establishing interoperability among knowledge organization systems (2004) 0.02
    Abstract
    This report analyzes the methodologies used in establishing interoperability among knowledge organization systems (KOS) such as controlled vocabularies and classification schemes that present the organized interpretation of knowledge structures. The development and trends of KOS are discussed with reference to the online era and the Internet era. Selected current projects and activities addressing KOS interoperability issues are reviewed in terms of the languages and structures involved. The methodological analysis encompasses both conventional and new methods that have proven to be widely accepted, including derivation/modeling, translation/adaptation, satellite and leaf node linking, direct mapping, co-occurrence mapping, switching, linking through a temporary union list, and linking through a thesaurus server protocol. Methods used in link storage and management, as well as common issues regarding mapping and methodological options, are also presented. It is concluded that interoperability of KOS is an unavoidable issue and process in today's networked environment. There have been and will be many multilingual products and services, with many involving various structured systems. Results from recent efforts are encouraging.
    Source
    Journal of the American Society for Information Science and Technology. 55(2004) no.5, S.377-395
  12. Zhang, X.: Concept integration of document databases using different indexing languages (2006) 0.02
    Abstract
    An integrated information retrieval system generally contains multiple databases that are inconsistent in terms of their content and indexing. This paper proposes a rough set-based transfer (RST) model for integration of the concepts of document databases using various indexing languages, so that users can search through the multiple databases using any of the current indexing languages. The RST model aims to effectively create meaningful transfer relations between the terms of two indexing languages, provided a number of documents are indexed with them in parallel. In our experiment, the indexing concepts of two databases respectively using the Thesaurus of Social Science (IZ) and the Schlagwortnormdatei (SWD) are integrated by means of the RST model. Finally, this paper compares the results achieved with a cross-concordance method, a conditional probability based method and the RST model.
    Source
    Information processing and management. 42(2006) no.1, S.121-135
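
    The RST model itself is not reproduced here, but the conditional-probability baseline it is compared against can be sketched: when documents are indexed with both vocabularies in parallel, P(target term | source term) can be estimated from co-assignment counts. The toy assignments below are invented for illustration:

      # Sketch of the conditional-probability baseline for linking two indexing
      # languages: estimate P(target term | source term) from documents that are
      # indexed with both vocabularies in parallel. Toy data, invented for illustration.
      from collections import Counter, defaultdict

      # Each document: (terms assigned from vocabulary A, terms assigned from vocabulary B)
      PARALLEL_INDEXING = [
          ({"Sozialpolitik"}, {"social policy", "welfare state"}),
          ({"Sozialpolitik", "Familie"}, {"social policy", "family"}),
          ({"Familie"}, {"family"}),
      ]

      def transfer_probabilities(parallel):
          source_freq = Counter()
          pair_freq = defaultdict(Counter)
          for a_terms, b_terms in parallel:
              for a in a_terms:
                  source_freq[a] += 1
                  pair_freq[a].update(b_terms)
          return {a: {b: n / source_freq[a] for b, n in targets.items()}
                  for a, targets in pair_freq.items()}

      print(transfer_probabilities(PARALLEL_INDEXING)["Sozialpolitik"])
      # e.g. {'social policy': 1.0, 'welfare state': 0.5, 'family': 0.5}
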
  13. Panzer, M.: Increasing patient findability of medical research : annotating clinical trials using standard vocabularies (2017) 0.02
    Abstract
    Multiple groups at Mayo Clinic organize knowledge with the aid of metadata for a variety of purposes. The ontology group focuses on consumer-oriented health information using several controlled vocabularies to support and coordinate care providers, consumers, clinical knowledge and, as part of its research management, information on clinical trials. Poor findability, inconsistent indexing and specialized language undermined the goal of increasing trial participation. The ontology group designed a metadata framework addressing disorders and procedures, investigational drugs and clinical departments, adopted and translated the clinical terminology of SNOMED CT and RxNorm vocabularies to consumer language and coordinated terminology with Mayo's Consumer Health Vocabulary. The result enables retrieval of clinical trial information from multiple access points including conditions, procedures, drug names, organizations involved and trial phase. The jump in inquiries since the search site was revised and vocabularies were modified shows evidence of success.
    Source
    Bulletin of the Association for Information Science and Technology. 43(2017) no.2, S.40-43
  14. Garcia Marco, F.J.: Compatibility & heterogeneity in knowledge organization : some reflections around a case study in the field of consumer information (2008) 0.02
    Abstract
    A case study in compatibility and heterogeneity of knowledge organization (KO) systems and processes is presented. It is based on the experience of the author in the field of information for consumer protection, a good example of the emerging transdisciplinary applied social sciences. The activities and knowledge organization problems and solutions of the Aragonian Consumers' Information and Documentation Centre are described and analyzed. Six assertions can be concluded: a) heterogeneity and compatibility are certainly an inherent problem in knowledge organization and also in practical domains; b) knowledge organization is also a social task, not only a logical one; c) knowledge organization is affected by economic and efficiency considerations; d) knowledge organization is at the heart of Knowledge Management; e) identifying and maintaining the focus in interdisciplinary fields is a must; f) the different knowledge organization tools of an institution must be considered as an integrated system, pursuing a unifying model.
    Date
    16. 3.2008 18:22:50
  15. Godby, C.J.; Smith, D.; Childress, E.: Encoding application profiles in a computational model of the crosswalk (2008) 0.02
    Abstract
    OCLC's Crosswalk Web Service (Godby, Smith and Childress, 2008) formalizes the notion of crosswalk, as defined in Gill et al. (n.d.), by hiding technical details and permitting the semantic equivalences to emerge as the centerpiece. One outcome is that metadata experts, who are typically not programmers, can enter the translation logic into a spreadsheet that can be automatically converted into executable code. In this paper, we describe the implementation of the Dublin Core Terms application profile in the management of crosswalks involving MARC. A crosswalk that encodes an application profile extends the typical format with two columns: one that annotates the namespace to which an element belongs, and one that annotates a 'broader-narrower' relation between a pair of elements, such as Dublin Core coverage and Dublin Core Terms spatial. This information is sufficient to produce scripts written in OCLC's Semantic Equivalence Expression Language (or Seel), which are called from the Crosswalk Web Service to generate production-grade translations. With its focus on elements that can be mixed, matched, added, and redefined, the application profile (Heery and Patel, 2000) is a natural fit with the translation model of the Crosswalk Web Service, which attempts to achieve interoperability by mapping one pair of elements at a time.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
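
    The two extra crosswalk columns described above can be pictured with a small, purely hypothetical sketch; the element names reuse the Dublin Core example from the abstract, but the table layout and the code are illustrative only, not OCLC's Crosswalk Web Service or its Seel language:

      # Hypothetical sketch of a crosswalk that encodes an application profile:
      # each row maps a source element to a target element and carries two extra
      # columns, the target namespace and an optional broader/narrower relation.
      CROSSWALK_ROWS = [
          # (source element, target element, target namespace, relation)
          ("marc:651$a", "coverage", "dc", None),
          ("marc:651$a", "spatial", "dcterms", "narrower-than dc:coverage"),
      ]

      def to_mapping_rules(rows):
          """Render crosswalk rows as simple 'source -> namespace:element' rules."""
          return [f"{src} -> {ns}:{elem}" + (f"  [{rel}]" if rel else "")
                  for src, elem, ns, rel in rows]

      for rule in to_mapping_rules(CROSSWALK_ROWS):
          print(rule)
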
  16. Krause, J.: Heterogenität und Integration : Zur Weiterentwicklung von Inhaltserschließung und Retrieval in sich veränderten Kontexten (2001) 0.02
    Abstract
    As an important support tool in science research, specialized information systems are rapidly changing their character. The potential for improvement compared with today's usual systems is enormous. This fact will be demonstrated by means of two problem complexes: - WWW search engines, which were developed without any government grants, are increasingly dominating the scene. Does the WWW displace information centers with their high quality databases? What are the results we can get nowadays using general WWW search engines? - In addition to the WWW and specialized databases, scientists now use WWW library catalogues of digital libraries, which combine the catalogues from an entire region or a country. At the same time, however, they are faced with highly decentralized heterogeneous databases which contain the widest range of textual sources and data, e.g. from surveys. One consequence is the presence of serious inconsistencies in quality, relevance and content analysis. Thus, the main problem to be solved is as follows: users must be supplied with heterogeneous data from different sources, modalities and content development processes via a visual user interface without inconsistencies in content development, for example, seriously impairing the quality of the search results, e.g. when phrasing their search inquiry in the terminology to which they are accustomed.
    Source
    Information Research & Content Management: Orientierung, Ordnung und Organisation im Wissensmarkt; 23. DGI-Online-Tagung der DGI und 53. Jahrestagung der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V. DGI, Frankfurt am Main, 8.-10.5.2001. Proceedings. Hrsg.: R. Schmidt
  17. Victorino, M.; Terto de Holanda, M.; Ishikawa, E.; Costa Oliveira, E.; Chhetri, S.: Transforming open data to linked open data using ontologies for information organization in big data environments of the Brazilian Government : the Brazilian database Government Open Linked Data - DBgoldbr (2018) 0.02
    Abstract
    The Brazilian Government has made a massive volume of structured, semi-structured and non-structured public data available on the web to ensure that the administration is as transparent as possible. Subsequently, providing applications with enough capability to handle this "big data environment" so that vital and decisive information is readily accessible has become a tremendous challenge. In this environment, data processing is done via new approaches in the area of information and computer science, involving technologies and processes for collecting, representing, storing and disseminating information. Along these lines, this paper presents a conceptual model, the technical architecture and the prototype implementation of a tool, denominated DBgoldbr, designed to classify government public information with the help of ontologies, by transforming open data into open linked data. To achieve this objective, we used "soft system methodology" to identify problems, to collect users' needs and to design solutions according to the objectives of specific groups. The DBgoldbr tool was designed to facilitate the search for open data made available by many Brazilian government institutions, so that this data can be reused to support the evaluation and monitoring of social programs, in order to support the design and management of public policies.
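
    The transformation described above - mapping the columns of a tabular open-data record to ontology terms and emitting linked open data - can be sketched as follows; the column names, URIs and property choices are invented for illustration and are not the DBgoldbr schema:

      # Sketch: map one row of a tabular open-data set to RDF triples.
      # Column names, URIs and properties are invented; the real DBgoldbr tool
      # relies on its own ontologies.
      ROW = {"programa": "Bolsa Familia", "ano": "2017", "municipio": "Brasilia"}

      BASE = "http://example.org/resource/"
      COLUMN_TO_PROPERTY = {
          "programa": "http://example.org/ontology/socialProgram",
          "ano": "http://purl.org/dc/terms/date",
          "municipio": "http://example.org/ontology/municipality",
      }

      def row_to_triples(row_id, row):
          subject = BASE + row_id
          return [(subject, COLUMN_TO_PROPERTY[col], value) for col, value in row.items()]

      for s, p, o in row_to_triples("record-001", ROW):
          print(f'<{s}> <{p}> "{o}" .')
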
  18. Metadata and semantics research : 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings (2016) 0.02
    Series
    Communications in computer and information science; 672
  19. Candela, G.: An automatic data quality approach to assess semantic data from cultural heritage institutions (2023) 0.02
    Date
    22. 6.2023 18:23:31
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.7, S.866-878
  20. Li, K.W.; Yang, C.C.: Automatic crosslingual thesaurus generated from the Hong Kong SAR Police Department Web Corpus for Crime Analysis (2005) 0.02
    Abstract
    For the sake of national security, very large volumes of data and information are generated and gathered daily. Much of this data and information is written in different languages, stored in different locations, and may be seemingly unconnected. Crosslingual semantic interoperability is a major challenge to generate an overview of this disparate data and information so that it can be analyzed, shared, searched, and summarized. The recent terrorist attacks and the tragic events of September 11, 2001 have prompted increased attention on national security and criminal analysis. Many Asian countries and cities, such as Japan, Taiwan, and Singapore, have been advised that they may become the next targets of terrorist attacks. Semantic interoperability has been a focus in digital library research. Traditional information retrieval (IR) approaches normally require a document to share some common keywords with the query. Generating the associations for the related terms between the two term spaces of users and documents is an important issue. The problem can be viewed as the creation of a thesaurus. Apart from this, terrorists and criminals may communicate through letters, e-mails, and faxes in languages other than English. The translation ambiguity significantly exacerbates the retrieval problem. The problem is expanded to crosslingual semantic interoperability. In this paper, we focus on the English/Chinese crosslingual semantic interoperability problem. However, the developed techniques are not limited to English and Chinese languages but can be applied to many other languages. English and Chinese are popular languages in the Asian region. Much information about national security or crime is communicated in these languages. An efficient automatically generated thesaurus between these languages is important to crosslingual information retrieval between English and Chinese languages. To facilitate crosslingual information retrieval, a corpus-based approach uses the term co-occurrence statistics in parallel or comparable corpora to construct a statistical translation model to cross the language boundary. In this paper, the text-based approach to align English/Chinese Hong Kong Police press release documents from the Web is first presented. We also introduce an algorithmic approach to generate a robust knowledge base based on statistical correlation analysis of the semantics (knowledge) embedded in the bilingual press release corpus. The research output consisted of a thesaurus-like, semantic network knowledge base, which can aid in semantics-based crosslingual information management and retrieval.
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.3, S.272-281
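
    The corpus-based idea in the abstract above - using term co-occurrence statistics over aligned English/Chinese press releases to associate terms across the language boundary - can be illustrated roughly as follows; the aligned pairs and the simple counting are invented toy material, not the paper's algorithm:

      # Rough sketch of corpus-based crosslingual term association: count how often
      # an English term and a Chinese term occur in aligned document pairs and rank
      # candidate associations by co-occurrence. Toy data, invented for illustration.
      from collections import Counter, defaultdict

      ALIGNED_PAIRS = [
          ("police arrest suspect", "警方 拘捕 疑犯"),
          ("police investigate robbery", "警方 調查 劫案"),
          ("court convicts suspect", "法庭 裁定 疑犯 罪成"),
      ]

      cooc = defaultdict(Counter)
      for en_doc, zh_doc in ALIGNED_PAIRS:
          for en_term in set(en_doc.split()):
              cooc[en_term].update(set(zh_doc.split()))

      # Chinese candidates for an English term, strongest first.
      print(cooc["police"].most_common(3))  # e.g. [('警方', 2), ('拘捕', 1), ('調查', 1)]
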

Languages

  • e 80
  • d 14

Types

  • a 62
  • el 24
  • m 10
  • s 5
  • x 4