Search (110 results, page 1 of 6)

  • theme_ss:"Semantische Interoperabilität"
  1. Hubain, R.; Wilde, M. De; Hooland, S. van: Automated SKOS vocabulary design for the biopharmaceutical industry (2016) 0.08
    0.084066235 = product of:
      0.12609935 = sum of:
        0.10616278 = weight(_text_:index in 5132) [ClassicSimilarity], result of:
          0.10616278 = score(doc=5132,freq=4.0), product of:
            0.2221244 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.05083213 = queryNorm
            0.4779429 = fieldWeight in 5132, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5132)
        0.019936562 = product of:
          0.039873123 = sum of:
            0.039873123 = weight(_text_:classification in 5132) [ClassicSimilarity], result of:
              0.039873123 = score(doc=5132,freq=2.0), product of:
                0.16188543 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05083213 = queryNorm
                0.24630459 = fieldWeight in 5132, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5132)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
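The score breakdown above appears to be Lucene's "explain" output for the ClassicSimilarity (tf-idf) model. As a minimal sketch, the arithmetic of this first hit can be reproduced directly from the numbers in the tree (the idf, queryNorm, and fieldNorm values are read off the explanation itself; the coord factors 1/2 and 2/3 are the matched-clause ratios shown):

```python
import math

def term_score(freq, idf, query_norm, field_norm):
    """One weight(_text_:term) node: queryWeight * fieldWeight."""
    query_weight = idf * query_norm        # idf * queryNorm
    tf = math.sqrt(freq)                   # ClassicSimilarity: tf = sqrt(freq)
    field_weight = tf * idf * field_norm   # tf * idf * fieldNorm
    return query_weight * field_weight

QUERY_NORM = 0.05083213

# "index" in doc 5132: freq=4, idf=4.369764, fieldNorm=0.0546875
s_index = term_score(4.0, 4.369764, QUERY_NORM, 0.0546875)

# "classification" in doc 5132: freq=2, idf=3.1847067, same fieldNorm,
# scaled by coord(1/2) inside its sub-query
s_class = term_score(2.0, 3.1847067, QUERY_NORM, 0.0546875) * 0.5

# Outer coord(2/3): two of three query clauses matched
total = (s_index + s_class) * (2.0 / 3.0)

print(total)   # ~0.084066, matching the 0.084066235 shown above
```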
    Abstract
    Ensuring quick and consistent access to large collections of unstructured documents is one of the biggest challenges facing knowledge-intensive organizations. Designing specific vocabularies to index and retrieve documents is often deemed too expensive, full-text search being preferred despite its known limitations. However, the process of creating controlled vocabularies can be partly automated thanks to natural language processing and machine learning techniques. With a case study from the biopharmaceutical industry, we demonstrate how small organizations can use an automated workflow in order to create a controlled vocabulary to index unstructured documents in a semantically meaningful way.
    Source
    Cataloging and classification quarterly. 54(2016) no.7, S.403-417
  2. Gödert, W.; Hubrich, J.; Boteram, F.: Thematische Recherche und Interoperabilität : Wege zur Optimierung des Zugriffs auf heterogen erschlossene Dokumente (2009) 0.06
    0.06203213 = product of:
      0.09304819 = sum of:
        0.075830564 = weight(_text_:index in 193) [ClassicSimilarity], result of:
          0.075830564 = score(doc=193,freq=4.0), product of:
            0.2221244 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.05083213 = queryNorm
            0.3413878 = fieldWeight in 193, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=193)
        0.01721763 = product of:
          0.03443526 = sum of:
            0.03443526 = weight(_text_:22 in 193) [ClassicSimilarity], result of:
              0.03443526 = score(doc=193,freq=2.0), product of:
                0.17800546 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05083213 = queryNorm
                0.19345059 = fieldWeight in 193, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=193)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    https://opus4.kobv.de/opus4-bib-info/frontdoor/index/index/searchtype/authorsearch/author/%22Hubrich%2C+Jessica%22/docId/703/start/0/rows/20
  3. Balakrishnan, U.; Krausz, A.; Voss, J.: Cocoda - ein Konkordanztool für bibliothekarische Klassifikationssysteme (2015) 0.06
    0.06004731 = product of:
      0.09007096 = sum of:
        0.075830564 = weight(_text_:index in 2030) [ClassicSimilarity], result of:
          0.075830564 = score(doc=2030,freq=4.0), product of:
            0.2221244 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.05083213 = queryNorm
            0.3413878 = fieldWeight in 2030, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2030)
        0.014240401 = product of:
          0.028480802 = sum of:
            0.028480802 = weight(_text_:classification in 2030) [ClassicSimilarity], result of:
              0.028480802 = score(doc=2030,freq=2.0), product of:
                0.16188543 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05083213 = queryNorm
                0.17593184 = fieldWeight in 2030, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2030)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Cocoda (Colibri Concordance Database for library classification systems) is a semi-automatic, web-based tool for creating and managing concordances between library classification systems. It is being developed as open-source software at the head office of the Gemeinsamer Bibliotheksverbund (VZG) within the subproject "coli-conc" (Colibri concordance creation) of the VZG project Colibri/DDC. The initial focus of "coli-conc" is building a concordance between the Dewey Decimal Classification (DDC) and the Regensburger Verbundklassifikation (RVK). The inherent structural and cultural differences between such finely subdivided library classification systems make a purely intellectual concordance-building process laborious and time-consuming. To simplify and speed it up, some of the intellectual steps employed in "coli-conc" are carried out automatically by the concordance tool Cocoda. The concordance suggestions generated by Cocoda are derived both from automatic analysis of the notations assigned in the title records of various databases and from comparative analysis of the conceptual structures of the classification systems. Cocoda is also intended to serve as a platform for storing, providing and analyzing concordances in order to increase the efficiency of their use. This presentation first introduces the concordance project "coli-conc", which forms the basis of the tool, then describes the tool's algorithm, user interface and technical details, and finally demonstrates the concordance creation process with Cocoda using examples.
    Source
    https://opus4.kobv.de/opus4-bib-info/frontdoor/index/index/docId/1676
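The abstract above describes concordance suggestions that link a notation in one scheme to ranked candidates in another. As a hedged illustration only (the notations, field names, scores, and threshold below are invented for the example and are not taken from coli-conc's actual data model), one such suggestion could be represented as a small record:

```python
# Hypothetical structure for one concordance suggestion between two
# classification schemes; all values are illustrative, not real coli-conc data.
suggestion = {
    "from_scheme": "DDC",
    "from_notation": "004.678",        # invented example notation
    "to_scheme": "RVK",
    "candidates": [
        # candidate notation + a score from automatic analysis
        {"notation": "ST 205", "source": "co-occurrence", "score": 0.87},
        {"notation": "ST 200", "source": "label-similarity", "score": 0.54},
    ],
}

# Pick the best candidate above a review threshold; weaker ones go to
# intellectual review, mirroring the semi-automatic workflow described above.
best = max(suggestion["candidates"], key=lambda c: c["score"])
needs_review = best["score"] < 0.8
print(best["notation"], needs_review)   # → ST 205 False
```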
  4. Vatant, B.; Dunsire, G.: Use case vocabulary merging (2010) 0.05
    0.04803785 = product of:
      0.07205677 = sum of:
        0.06066445 = weight(_text_:index in 4336) [ClassicSimilarity], result of:
          0.06066445 = score(doc=4336,freq=4.0), product of:
            0.2221244 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.05083213 = queryNorm
            0.27311024 = fieldWeight in 4336, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.03125 = fieldNorm(doc=4336)
        0.0113923205 = product of:
          0.022784641 = sum of:
            0.022784641 = weight(_text_:classification in 4336) [ClassicSimilarity], result of:
              0.022784641 = score(doc=4336,freq=2.0), product of:
                0.16188543 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05083213 = queryNorm
                0.14074548 = fieldWeight in 4336, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4336)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The publication of library legacy includes publication of structuring vocabularies such as thesauri, classifications, subject headings. Different sources use different vocabularies, different in structure, width, depth and scope, and languages. Federated access to distributed data collections is currently possible if they rely on the same vocabularies. Mapping techniques and standards supporting them (such as SKOS mapping properties, OWL sameAs and equivalentClass) are still largely experimental, even in the linked data land. Libraries use a variety of controlled subject vocabulary and classification schemes to index items in their collections. Although most collections will employ only a single scheme, different schemes may be chosen to index different collections within a library or in separate libraries; schemes are chosen on the basis of language, subject focus (general or specific), granularity (specificity), user expectation, and availability and support (cost, currency, completeness, tools). For example, a typical academic library will operate separate metadata systems for the library's main collections, special collections (e.g. manuscripts, archives, audiovisual), digital collections, and one or more institutional repositories for teaching and research output; each of these systems may employ a different subject vocabulary, with little or no interoperability between terms and concepts. Users expect to have a single point-of-search in resource discovery services focussed on their local institutional collections. Librarians have to use complex and expensive resource discovery platforms to meet user expectations. Library communities continue to develop resource discovery services for consortia with a geographical, subject, sector (public, academic, school, special libraries), and/or domain (libraries, archives, museums) focus. Services are based on distributed searching (e.g. via Z39.50) or metadata aggregations (e.g. OCLC's WorldCat and OAISter). 
As a result, the number of different subject schemes encountered in such services is increasing. Trans-national consortia (e.g. Europeana) add to the complexity of the environment by including subject vocabularies in multiple languages. Users expect single point-of-search in consortial resource discovery service involving multiple organisations and large-scale metadata aggregations. Users also expect to be able to search for subjects using their own language and terms in an unambiguous, contextualised manner.
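The mapping techniques named above (SKOS mapping properties, owl:sameAs, owl:equivalentClass) are what make federated access across vocabularies possible: a query concept in one scheme is expanded to its mapped counterparts in other schemes before each collection is searched. A minimal sketch, with invented concept identifiers and mappings:

```python
# A hedged sketch of mapping-based query expansion; the identifiers and
# mappings below are invented for illustration, not real published mappings.
CLOSE_MATCH = "skos:closeMatch"

mappings = [
    ("lcsh:Libraries", CLOSE_MATCH, "rameau:Bibliotheques"),
    ("lcsh:Libraries", CLOSE_MATCH, "gnd:Bibliothek"),
]

def expand(concept, mappings):
    """Return the concept plus everything it is mapped to (either direction)."""
    out = {concept}
    for s, _, o in mappings:
        if s == concept:
            out.add(o)
        elif o == concept:
            out.add(s)
    return out

print(sorted(expand("lcsh:Libraries", mappings)))
# ['gnd:Bibliothek', 'lcsh:Libraries', 'rameau:Bibliotheques']
```

In a real setting the mapping property matters: skos:closeMatch licenses this kind of search-time substitution, while owl:sameAs asserts full identity and is a far stronger claim.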
  5. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.05
    0.0473849 = product of:
      0.1421547 = sum of:
        0.1421547 = sum of:
          0.083716124 = weight(_text_:classification in 1967) [ClassicSimilarity], result of:
            0.083716124 = score(doc=1967,freq=12.0), product of:
              0.16188543 = queryWeight, product of:
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.05083213 = queryNorm
              0.5171319 = fieldWeight in 1967, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.046875 = fieldNorm(doc=1967)
          0.058438577 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
            0.058438577 = score(doc=1967,freq=4.0), product of:
              0.17800546 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05083213 = queryNorm
              0.32829654 = fieldWeight in 1967, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1967)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and /or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
  6. Golub, K.: Subject access in Swedish discovery services (2018) 0.05
    0.04524047 = product of:
      0.0678607 = sum of:
        0.0536203 = weight(_text_:index in 4379) [ClassicSimilarity], result of:
          0.0536203 = score(doc=4379,freq=2.0), product of:
            0.2221244 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.05083213 = queryNorm
            0.24139762 = fieldWeight in 4379, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4379)
        0.014240401 = product of:
          0.028480802 = sum of:
            0.028480802 = weight(_text_:classification in 4379) [ClassicSimilarity], result of:
              0.028480802 = score(doc=4379,freq=2.0), product of:
                0.16188543 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05083213 = queryNorm
                0.17593184 = fieldWeight in 4379, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4379)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    While support for subject searching has traditionally been advocated for library catalogs, often in the form of a catalog objective to find everything that a library has on a certain topic, research has shown that subject access has not been satisfactory. Many existing online catalogs and discovery services do not seem to make good use of the intellectual effort invested in assigning controlled subject index terms and classes. For example, few support hierarchical browsing of classification schemes and other controlled vocabularies with hierarchical structures, and few provide end-user-friendly options to choose a more specific concept to increase precision, a broader or related concept to increase recall, to disambiguate homonyms, or to find which term is best used to name a concept. Optimum subject access in library catalogs and discovery services is analyzed from the perspective of earlier research as well as contemporary conceptual models and cataloguing codes, and eighteen features of what such access should entail in practice are proposed. In an exploratory qualitative study, the three discovery services most commonly used in Swedish academic libraries are analyzed against these features. In line with previous research, subject access in contemporary interfaces is shown to be less than optimal, in spite of the fact that individual collections have been indexed with controlled vocabularies and that a significant number of controlled vocabularies have been mapped to each other and are available in interoperable standards. Strategic action is proposed to build research-informed (inter)national standards and guidelines.
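The hierarchy-aware search aids the abstract calls for (a narrower concept to raise precision, a broader one to raise recall) can be sketched over any vocabulary with broader/narrower relations. The tiny vocabulary below is invented for illustration:

```python
# Hedged sketch of precision/recall aids over an invented mini-vocabulary.
narrower = {
    "Knowledge organization": ["Classification", "Subject indexing"],
    "Classification": ["DDC", "UDC"],
}
# Invert the narrower relation to get each concept's broader concept.
broader = {c: b for b, cs in narrower.items() for c in cs}

def for_precision(term):
    """Suggest more specific concepts (raise precision)."""
    return narrower.get(term, [])

def for_recall(term):
    """Suggest the more general concept (raise recall), if any."""
    return broader.get(term)

print(for_precision("Classification"))   # ['DDC', 'UDC']
print(for_recall("Classification"))      # Knowledge organization
```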
  7. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2014) 0.04
    0.043084897 = product of:
      0.12925468 = sum of:
        0.12925468 = sum of:
          0.08055587 = weight(_text_:classification in 1962) [ClassicSimilarity], result of:
            0.08055587 = score(doc=1962,freq=16.0), product of:
              0.16188543 = queryWeight, product of:
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.05083213 = queryNorm
              0.49761042 = fieldWeight in 1962, product of:
                4.0 = tf(freq=16.0), with freq of:
                  16.0 = termFreq=16.0
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1962)
          0.04869881 = weight(_text_:22 in 1962) [ClassicSimilarity], result of:
            0.04869881 = score(doc=1962,freq=4.0), product of:
              0.17800546 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05083213 = queryNorm
              0.27358043 = fieldWeight in 1962, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1962)
      0.33333334 = coord(1/3)
    
    Abstract
    This article reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The article discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the Dewey Decimal Classification [DDC] (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
    Source
    Cataloging and classification quarterly. 52(2014) no.1, S.90-101
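FRSAD's two core entities, thema (anything that can be the subject of a work) and nomen (any sign by which a thema is known), are what let the study above treat a class and its names separately: one class keeps its identity across translations while carrying a notation and captions in several languages. A minimal sketch, assuming illustrative DDC-like values (the Swedish caption is invented for the example):

```python
from dataclasses import dataclass, field

# Hedged sketch of an FRSAD-style view: one thema (a class in a
# classification scheme) known by several nomens.
@dataclass
class Nomen:
    value: str
    kind: str            # e.g. "notation" or "caption"
    language: str = ""   # empty for language-independent nomens

@dataclass
class Thema:
    nomens: list = field(default_factory=list)

    def appellation(self, language):
        """First caption in the requested language, if any."""
        for n in self.nomens:
            if n.kind == "caption" and n.language == language:
                return n.value
        return None

cls = Thema([
    Nomen("020", "notation"),   # the same thema across editions/translations
    Nomen("Library & information sciences", "caption", "en"),
    Nomen("Bibliotek och informationsvetenskap", "caption", "sv"),  # illustrative
])

print(cls.appellation("sv"))
```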
  8. Heckner, M.; Mühlbacher, S.; Wolff, C.: Tagging tagging : a classification model for user keywords in scientific bibliography management systems (2007) 0.04
    0.039338276 = product of:
      0.059007414 = sum of:
        0.04289624 = weight(_text_:index in 533) [ClassicSimilarity], result of:
          0.04289624 = score(doc=533,freq=2.0), product of:
            0.2221244 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.05083213 = queryNorm
            0.1931181 = fieldWeight in 533, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.03125 = fieldNorm(doc=533)
        0.016111175 = product of:
          0.03222235 = sum of:
            0.03222235 = weight(_text_:classification in 533) [ClassicSimilarity], result of:
              0.03222235 = score(doc=533,freq=4.0), product of:
                0.16188543 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05083213 = queryNorm
                0.19904417 = fieldWeight in 533, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.03125 = fieldNorm(doc=533)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Recently, a growing number of systems that allow personal content annotation (tagging) have been created, ranging from personal sites for organising bookmarks (del.icio.us), photos (flickr.com) or videos (video.google.com, youtube.com) to systems for managing bibliographies for scientific research projects (citeulike.org, connotea.org). Simultaneously, a debate on the pros and cons of allowing users to add personal keywords to digital content has arisen. One recurrent point of discussion is whether tagging can solve the well-known vocabulary problem: in order to support successful retrieval in complex environments, it is necessary to index an object with a variety of aliases (cf. Furnas 1987). In this spirit, social tagging enhances the pool of rigid, traditional keywording by adding user-created retrieval vocabularies. Furthermore, tagging goes beyond simple personal content-based keywords by providing meta-keywords like funny or interesting that "identify qualities or characteristics" (Golder and Huberman 2006, Kipp and Campbell 2006, Kipp 2007, Feinberg 2006, Kroski 2005). Conversely, tagging systems are claimed to lead to semantic difficulties that may hinder the precision and recall of tagging systems (e.g. the polysemy problem, cf. Marlow 2006, Lakoff 2005, Golder and Huberman 2006). Empirical research on social tagging is still rare and mostly takes a computational-linguistics or librarian point of view (Voß 2007), focusing either on automatic statistical analyses of large data sets or on intellectual inspection of single cases of tag usage: some scientists have studied the evolution of tag vocabularies and tag distribution in specific systems (Golder and Huberman 2006, Hammond 2005), while others concentrate on tagging behaviour and tagger characteristics in collaborative systems (Hammond 2005, Kipp and Campbell 2007, Feinberg 2006, Sen 2006).
 However, little research has been conducted on the functional and linguistic characteristics of tags. An analysis of these patterns could show differences between user wording and conventional keywording. In order to provide a reasonable basis for comparison, a classification system for existing tags is needed.
  9. Lösse, M.; Svensson, L.: "Classification at a Crossroad" : Internationales UDC-Seminar 2009 in Den Haag, Niederlande (2010) 0.04
    0.0355907 = product of:
      0.1067721 = sum of:
        0.1067721 = sum of:
          0.048333526 = weight(_text_:classification in 4379) [ClassicSimilarity], result of:
            0.048333526 = score(doc=4379,freq=4.0), product of:
              0.16188543 = queryWeight, product of:
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.05083213 = queryNorm
              0.29856625 = fieldWeight in 4379, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.046875 = fieldNorm(doc=4379)
          0.058438577 = weight(_text_:22 in 4379) [ClassicSimilarity], result of:
            0.058438577 = score(doc=4379,freq=4.0), product of:
              0.17800546 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05083213 = queryNorm
              0.32829654 = fieldWeight in 4379, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4379)
      0.33333334 = coord(1/3)
    
    Abstract
    On 29 and 30 October 2009, the second international UDC seminar on the theme "Classification at a Crossroad" took place at the Royal Library in The Hague. Like the first conference of this kind in 2007, it was organized by the UDC Consortium (UDCC). This year's event focused on indexing the World Wide Web through better use of classifications (in particular, of course, the UDC), including user-friendly representations of information and knowledge. Standards, new technologies and services, semantic search and multilingual access also played a role. 135 participants from 35 countries came to The Hague for the event. The programme covered a broad range, with 22 papers from 14 different countries; the United Kingdom was most strongly represented, with five contributions. On both conference days the thematic focus was set by the opening lectures, which were then explored in greater depth in a total of six thematic sessions.
    Date
    22. 1.2010 15:06:54
  10. Köbler, J.; Niederklapfer, T.: Kreuzkonkordanzen zwischen RVK-BK-MSC-PACS der Fachbereiche Mathematik und Physik (2010) 0.03
    0.033506185 = product of:
      0.10051855 = sum of:
        0.10051855 = sum of:
          0.059196237 = weight(_text_:classification in 4408) [ClassicSimilarity], result of:
            0.059196237 = score(doc=4408,freq=6.0), product of:
              0.16188543 = queryWeight, product of:
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.05083213 = queryNorm
              0.3656675 = fieldWeight in 4408, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.046875 = fieldNorm(doc=4408)
          0.04132231 = weight(_text_:22 in 4408) [ClassicSimilarity], result of:
            0.04132231 = score(doc=4408,freq=2.0), product of:
              0.17800546 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05083213 = queryNorm
              0.23214069 = fieldWeight in 4408, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4408)
      0.33333334 = coord(1/3)
    
    Abstract
    Our project aims to create a cross-concordance between the universal classifications "Regensburger Verbundklassifikation (RVK)" and "Basisklassifikation (BK)" and the subject classifications "Mathematics Subject Classification (MSC2010)" and "Physics and Astronomy Classification Scheme (PACS2010)" in the fields of mathematics and physics. Conclusion: "The classificatory agreement between the Regensburger Verbundklassifikation and the Physics and Astronomy Classification Scheme was quite good in some subject areas (e.g. nuclear physics), but other areas (e.g. polymer physics, mineralogy) showed very little agreement. In total we were able to create 890 simple links; multiple links were not counted, for technical reasons. The project as a whole was very extensive and therefore could not be treated exhaustively within the twenty project days. Further development, particularly with regard to collective access in the form of a web form and to automatic classification, nevertheless appears worthwhile."
    Pages
    22 p.
  11. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.03
    0.031396907 = product of:
      0.09419072 = sum of:
        0.09419072 = product of:
          0.28257215 = sum of:
            0.28257215 = weight(_text_:3a in 306) [ClassicSimilarity], result of:
              0.28257215 = score(doc=306,freq=2.0), product of:
                0.43095535 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05083213 = queryNorm
                0.65568775 = fieldWeight in 306, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=306)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    Cf.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
  12. Dunsire, G.; Nicholson, D.: Signposting the crossroads : terminology Web services and classification-based interoperability (2010) 0.03
    0.030465623 = product of:
      0.09139687 = sum of:
        0.09139687 = sum of:
          0.056961603 = weight(_text_:classification in 4066) [ClassicSimilarity], result of:
            0.056961603 = score(doc=4066,freq=8.0), product of:
              0.16188543 = queryWeight, product of:
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.05083213 = queryNorm
              0.35186368 = fieldWeight in 4066, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4066)
          0.03443526 = weight(_text_:22 in 4066) [ClassicSimilarity], result of:
            0.03443526 = score(doc=4066,freq=2.0), product of:
              0.17800546 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05083213 = queryNorm
              0.19345059 = fieldWeight in 4066, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4066)
      0.33333334 = coord(1/3)
    
    Abstract
    The focus of this paper is the provision of terminology- and classification-based interoperability data via web services, initially using interoperability data based on a Dewey Decimal Classification (DDC) spine, but with an aim to explore other possibilities in time, including the use of other spines. The High-Level Thesaurus Project (HILT) Phase IV developed pilot web services based on SRW/U, SOAP, and SKOS to deliver machine-readable terminology and cross-terminology mapping data likely to be useful to information services wishing to enhance their subject search or browse services. It also developed an associated toolkit to help information services' technical staff embed HILT-related functionality within service interfaces. Several UK information services have created illustrative user interface enhancements using HILT functionality, and these will demonstrate what is possible. HILT currently has the following subject schemes mounted and available: DDC, CAB, GCMD, HASSET, IPSV, LCSH, MeSH, NMR, SCAS, UNESCO, and AAT. It also has high-level mappings between some of these schemes and DDC, and some deeper pilot mappings available.
    Content
    Teil von: Papers from Classification at a Crossroads: Multiple Directions to Usability: International UDC Seminar 2009-Part 2
    Date
    6. 1.2011 19:22:48
  13. Si, L.: Encoding formats and consideration of requirements for mapping (2007)
    Abstract
    With the increasing requirement of establishing semantic mappings between different vocabularies, further development of these encoding formats is becoming more and more important. For this reason, four types of knowledge representation formats were assessed: MARC 21 for Classification Data in XML, the Zthes XML Schema, XTM (XML Topic Maps), and SKOS (Simple Knowledge Organisation System). This paper explores the potential of adapting these representation formats to support different semantic mapping methods, and discusses the implications of extending them to represent more complex KOS.
    Date
    26.12.2011 13:22:27
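As an editorial illustration of the kind of encoding compared in the record above: a cross-vocabulary mapping can be expressed with the SKOS mapping properties and serialized as Turtle. The sketch below uses only the Python standard library; the concept URIs are invented for illustration and do not come from the paper.

```python
# Minimal sketch: emitting a SKOS cross-vocabulary mapping as Turtle.
# The concept URIs below are invented for illustration only.

PREFIXES = "@prefix skos: <http://www.w3.org/2004/02/skos/core#> .\n"

def skos_mapping(source_uri: str, relation: str, target_uri: str) -> str:
    """Render one SKOS mapping triple in Turtle.

    `relation` must be one of the SKOS mapping properties:
    exactMatch, closeMatch, broadMatch, narrowMatch, relatedMatch.
    """
    allowed = {"exactMatch", "closeMatch", "broadMatch",
               "narrowMatch", "relatedMatch"}
    if relation not in allowed:
        raise ValueError(f"not a SKOS mapping property: {relation}")
    return f"<{source_uri}> skos:{relation} <{target_uri}> ."

# Two hypothetical concepts from different vocabularies:
document = PREFIXES + skos_mapping(
    "http://example.org/vocabA/informationRetrieval",
    "closeMatch",
    "http://example.org/vocabB/documentRetrieval",
)
print(document)
```

The same mapping could equally be encoded in Zthes or XTM; SKOS is shown here because it is the format the assessed standards converge on for simple concept-to-concept links.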
  14. Si, L.E.; O'Brien, A.; Probets, S.: Integration of distributed terminology resources to facilitate subject cross-browsing for library portal systems (2009)
    Abstract
    Purpose: To develop a prototype middleware framework between different terminology resources in order to provide a subject cross-browsing service for library portal systems. Design/methodology/approach: Nine terminology experts were interviewed to collect appropriate knowledge to support the development of a theoretical framework for the research. Based on this, a simplified software-based prototype system was constructed incorporating the knowledge acquired. The prototype involved mappings between the computer science schedule of the Dewey Decimal Classification (which acted as a spine) and two controlled vocabularies, UKAT and the ACM Computing Classification. Subsequently, six further experts in the field were invited to evaluate the prototype system and provide feedback to improve the framework. Findings: The major findings showed that, given the large variety of terminology resources distributed on the web, the proposed middleware service is essential to integrate the different terminology resources technically and semantically in order to facilitate subject cross-browsing. A set of recommendations is also made outlining the important approaches and features that support such a cross-browsing middleware service.
    Content
    This paper is a pre-print version presented at the ISKO UK 2009 conference, 22-23 June, prior to peer review and editing. For published proceedings see special issue of Aslib Proceedings journal.
    Object
    ACM Computing Classification
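The spine-based cross-browsing described in the record above can be sketched with plain data structures: each vocabulary is mapped onto DDC notations, and a term in one vocabulary is routed through the shared spine to terms in the other. A minimal sketch; the terms and class numbers are invented for illustration, not taken from the prototype.

```python
# Minimal sketch of spine-based cross-browsing: route a term from one
# vocabulary through a DDC notation (the spine) to terms in another.
# All mappings and class numbers below are invented for illustration.

# term -> DDC notation, per vocabulary
TO_SPINE = {
    "UKAT": {"Information retrieval": "025.04"},
    "ACM CCS": {"Search process": "025.04", "Clustering": "025.04"},
}

def cross_browse(term, source_vocab, target_vocab):
    """Find target-vocabulary terms mapped to the same DDC notation."""
    notation = TO_SPINE[source_vocab].get(term)
    if notation is None:
        return []
    return sorted(t for t, n in TO_SPINE[target_vocab].items()
                  if n == notation)

print(cross_browse("Information retrieval", "UKAT", "ACM CCS"))
```

The design choice mirrors the paper's middleware idea: vocabularies never map to each other directly, only to the spine, so adding a new vocabulary requires one mapping rather than one per existing vocabulary.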
  15. Levergood, B.; Farrenkopf, S.; Frasnelli, E.: ¬The specification of the language of the field and interoperability : cross-language access to catalogues and online libraries (CACAO) (2008)
    Abstract
    The CACAO Project (Cross-language Access to Catalogues and Online Libraries) has been designed to implement natural language processing and cross-language information retrieval techniques to provide cross-language access to information in libraries, a critical issue in the linguistically diverse European Union. This project report addresses two metadata-related challenges for the library community in this context: "false friends" (identical words having different meanings in different languages) and term ambiguity. The possible solutions involve enriching the metadata with attributes specifying language or the source authority file, or associating potential search terms to classes in a classification system. The European Library will evaluate an early implementation of this work in late 2008.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
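The "false friends" problem the record above describes (identical strings with different meanings across languages) can be illustrated with a toy index in which each subject term carries a language attribute, one of the enrichment strategies the abstract proposes. The records and terms below are invented for illustration.

```python
# Minimal sketch of disambiguating "false friends" by language-tagging
# subject terms. Records are invented: "Gift" means poison in German,
# while "gift" is a present in English.

RECORDS = [
    {"id": 1, "term": "gift", "lang": "en"},   # a present
    {"id": 2, "term": "gift", "lang": "de"},   # poison
]

def search(term, lang=None):
    """Match on the term string, optionally restricted by language."""
    return [r["id"] for r in RECORDS
            if r["term"] == term.lower()
            and (lang is None or r["lang"] == lang)]

print(search("Gift"))          # ambiguous: matches both records
print(search("Gift", "de"))    # the language attribute disambiguates
```

Associating terms with classes in a classification system, the abstract's other proposed solution, would resolve the same ambiguity language-independently.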
  16. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021)
    Content
    Master thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. See: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the presentation at: https://www.google.com/url?sa=i&rct=j&q=&esrc=s&source=web&cd=&ved=0CAIQw7AJahcKEwjwoZzzytz_AhUAAAAAHQAAAAAQAg&url=https%3A%2F%2Fwiki.dnb.de%2Fdownload%2Fattachments%2F252121510%2FDA3%2520Workshop-Gabler.pdf%3Fversion%3D1%26modificationDate%3D1671093170000%26api%3Dv2&psig=AOvVaw0szwENK1or3HevgvIDOfjx&ust=1687719410889597&opi=89978449.
  17. Hafner, R.; Schelling, B.: Automatisierung der Sacherschließung mit Semantic Web Technologie (2015)
    Date
    22. 6.2015 16:08:38
  18. Coen, G.; Smiraglia, R.P.: Toward better interoperability of the NARCIS classification (2019)
    Abstract
    Research information can be useful to science stakeholders for discovering, evaluating and planning research activities. In the Netherlands, the institute tasked with the stewardship of national research information is DANS (Data Archiving and Networked Services). DANS is the home of NARCIS, the national portal for research information, which uses a similarly named national research classification. The NARCIS Classification assigns symbols to represent the knowledge bases of contributing scholars. A recent research stream in knowledge organization known as comparative classification uses two or more classifications experimentally to generate empirical evidence about coverage of conceptual content, population of the classes, and economy of classification. This paper builds on that research in order to further understand the comparative impact of the NARCIS Classification alongside a classification designed specifically for information resources. Our six cases come from the DANS project Knowledge Organization System Observatory (KOSo), which itself is classified using the Information Coding Classification (ICC) created in 1982 by Ingetraut Dahlberg. The ICC is considered to have the merits of universality, faceting, and a top-down approach. Results are exploratory, indicating that both classifications provide fairly precise coverage. The inflexibility of the NARCIS Classification makes it difficult to express complex concepts. The meta-ontological, epistemic stance of the ICC is apparent in all aspects of this study. Using the two together in the DANS KOS Observatory will provide users with both clarity of scientific positioning and ontological relativity.
    Footnote
    Contribution to a special issue: Research Information Systems and Science Classifications; including papers from "Trajectories for Research: Fathoming the Promise of the NARCIS Classification," 27-28 September 2018, The Hague, The Netherlands.
    Object
    NARCIS classification
  19. Hider, P.; Coe, M.: Academic disciplines in the context of library classification : mapping university faculty structures to the DDC and LCC schemes (2022)
    Abstract
    We investigated the extent to which the Dewey Decimal Classification (DDC) and the Library of Congress Classification reflect the organizational structures of Australian universities. The mapping of the faculty structures of ten universities to the two schemes showed strong alignment, with very few fields represented in the names of the organizational units not covered at all by either bibliographic scheme. This suggests a degree of universality and "scientific and educational consensus" with respect to both the schemes and academic disciplines. The article goes on to discuss the concept of discipline and its application in bibliographic classification.
    Source
    Cataloging and classification quarterly. 60(2022) no.2, p.194-213
  20. Angjeli, A.; Isaac, A.: Semantic web and vocabularies interoperability : an experiment with illuminations collections (2008)
    Abstract
    During the years 2006 and 2007, the BnF collaborated with the National Library of the Netherlands within the framework of the Dutch project STITCH. This project, through concrete experiments, investigates semantic interoperability, especially in relation to searching. How can we conduct semantic searches across several digital heritage collections? The metadata related to content analysis are often heterogeneous. Beyond using manual mapping of semantically similar entities, STITCH explores the techniques of the semantic web, particularly ontology mapping. This paper describes an experiment on two digital iconographic collections: Mandragore, the iconographic database of the Manuscript Department of the BnF, and the Medieval Illuminated Manuscripts collection of the KB. While the content of these two collections is similar, they have been processed differently, and the vocabularies used to index their content are very different. The vocabularies of Mandragore and Iconclass are both controlled and hierarchical, but they do not have the same semantics and structure. This difference is of particular interest to the STITCH project, as it aims to study the automatic alignment of two vocabularies. The collaborative experiment started with a precise analysis of each of the vocabularies, including concepts and their representation, lexical properties of the terms used, semantic relationships, etc. The team of Dutch researchers then studied and implemented mechanisms of alignment of the two vocabularies. The initial models being different, a common standard was needed to enable the alignment procedures; RDF and SKOS were selected for this purpose. The experiment led to a prototype that allows querying both databases at the same time through a single interface. The descriptors of each vocabulary are used as search terms for all images regardless of the collection they belong to. This experiment is only one step in the search for solutions that aim at making navigation easier between heritage collections that have heterogeneous metadata.
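The single-interface querying the record above describes can be sketched as a small alignment table: pairs of matched descriptors from the two vocabularies, used to expand a query so that it retrieves images from both collections. The descriptor pairs and image identifiers below are invented for illustration; the real prototype derived its alignments automatically.

```python
# Minimal sketch of cross-collection querying via a vocabulary alignment.
# Descriptor pairs and image records below are invented for illustration.

# Pairs of aligned descriptors: (Mandragore term, Iconclass-style term)
ALIGNMENT = [
    ("ange", "angel"),
    ("roi", "king"),
]

# Each collection indexes images by its own descriptors.
MANDRAGORE = {"ange": ["bnf-img-001"], "roi": ["bnf-img-007"]}
KB_ILLUMINATIONS = {"angel": ["kb-img-042"], "king": ["kb-img-090"]}

def expand(term):
    """Return the set of aligned descriptors equivalent to `term`."""
    terms = {term}
    for a, b in ALIGNMENT:
        if term in (a, b):
            terms.update((a, b))
    return terms

def search(term):
    """Query both collections with the expanded descriptor set."""
    hits = []
    for t in expand(term):
        hits += MANDRAGORE.get(t, []) + KB_ILLUMINATIONS.get(t, [])
    return sorted(hits)

print(search("ange"))  # retrieves images from both collections
```

A descriptor from either vocabulary thus reaches images in both databases, which is the behaviour the prototype's single search interface demonstrated.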
