Search (61 results, page 2 of 4)

  • theme_ss:"Semantische Interoperabilität"
  • type_ss:"el"
  1. Strötgen, R.: Anfragetransfers zur Integration von Internetquellen in Digitalen Bibliotheken auf der Grundlage statistischer Termrelationen (2007) 0.00
    0.0012581941 = product of:
      0.018872911 = sum of:
        0.018872911 = weight(_text_:und in 588) [ClassicSimilarity], result of:
          0.018872911 = score(doc=588,freq=18.0), product of:
            0.06422601 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028978055 = queryNorm
            0.29385152 = fieldWeight in 588, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=588)
      0.06666667 = coord(1/15)
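    The per-hit score breakdowns on this page are Lucene explain trees for the ClassicSimilarity (TF-IDF) model. The short sketch below recomputes the factors shown above for this first hit; the helper functions and variable names are illustrative, not part of any Lucene API.

```python
import math

# Reconstruction of the ClassicSimilarity factors from the explain tree above
# (term "und" with freq=18 in doc 588); helper names are ours, not Lucene's.

def tf(freq: float) -> float:
    return math.sqrt(freq)                                # ~4.2426405 for freq=18.0

def idf(doc_freq: int, max_docs: int) -> float:
    return 1.0 + math.log(max_docs / (doc_freq + 1))      # ~2.216367

query_norm   = 0.028978055                                # query-level normalisation factor
field_norm   = 0.03125                                    # length norm stored for doc 588
query_weight = idf(13101, 44218) * query_norm             # ~0.06422601
field_weight = tf(18.0) * idf(13101, 44218) * field_norm  # ~0.29385152

term_score  = query_weight * field_weight                 # ~0.018872911
coord       = 1 / 15                                      # only 1 of 15 query clauses matched
final_score = term_score * coord                          # ~0.0012581941, the value shown above

print(f"{final_score:.10f}")
```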
    
    Abstract
    In digital libraries, which provide integrated access to what are usually several different document collections, heterogeneity appears in many forms: as technical heterogeneity, arising from the interplay of different operating systems, database systems and software; as structural heterogeneity, arising from differing document structures and metadata standards; and finally as semantic heterogeneity, when documents have been indexed using different ontologies (used here in the broader sense of documentation languages such as thesauri and classifications) or have not been described with metadata at all. Semantic heterogeneity can be addressed by advancing the standardization of metadata (e.g. by the Dublin Core Metadata Initiative or the Resource Description Framework (RDF) in the context of the Semantic Web) and by promoting its use. However, given the diverging interests of all the parties involved (including libraries, documentation centres, database producers, and "free" providers of document collections and databases), there is little prospect that such standardization will eliminate semantic heterogeneity entirely; in particular, a uniform use of vocabularies and ontologies is not in sight. In the CARMEN project, the problem of semantic heterogeneity was tackled on the one hand by automatic extraction of metadata from Internet documents and on the other by systems that transform queries using cross-concordances and statistically generated relations. Part of the results of the work at the IZ Sozialwissenschaften were statistical relations between descriptors, computed from co-occurrence relationships. These relations were then used to translate queries in order to mediate between different ontologies or free-text terms. The goal of this translation is to improve the (automatic) crossover between differently indexed document collections, e.g. subject databases and Internet documents, as an approach to handling semantic heterogeneity.
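    The statistical relations mentioned above are derived from descriptor co-occurrence. As a rough illustration of the general idea, not of the actual CARMEN implementation, the hypothetical sketch below counts how often source descriptors and target terms are assigned to the same documents and uses the conditional co-occurrence rate to expand a query term.

```python
from collections import defaultdict

# Hypothetical toy corpus: each document carries descriptors from a source
# vocabulary and terms from a differently indexed target collection.
docs = [
    {"src": {"Informationssystem"}, "tgt": {"information system", "database"}},
    {"src": {"Informationssystem"}, "tgt": {"information system"}},
    {"src": {"Datenbank"},          "tgt": {"database"}},
]

cooc = defaultdict(lambda: defaultdict(int))   # source term -> target term -> joint count
src_freq = defaultdict(int)                    # source term -> document frequency

for d in docs:
    for s in d["src"]:
        src_freq[s] += 1
        for t in d["tgt"]:
            cooc[s][t] += 1

def translate(term, threshold=0.5):
    """Target terms whose co-occurrence rate with `term` reaches the threshold."""
    n = src_freq[term]
    return sorted(t for t, c in cooc[term].items() if n and c / n >= threshold)

print(translate("Informationssystem"))   # ['database', 'information system']
```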
  2. Hall, M.M.: Automatisierte semantische Harmonisierung von Landnutzungsdaten (2006) 0.00
    0.0011862369 = product of:
      0.017793551 = sum of:
        0.017793551 = weight(_text_:und in 4177) [ClassicSimilarity], result of:
          0.017793551 = score(doc=4177,freq=4.0), product of:
            0.06422601 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028978055 = queryNorm
            0.27704588 = fieldWeight in 4177, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=4177)
      0.06666667 = coord(1/15)
    
    Abstract
    Spatial data infrastructures make it easier to access data from distributed data sources. To allow such data sources to be combined, they must be harmonized both syntactically and semantically. Semantic harmonization requires the definition of a semantic similarity measure that makes it possible to compare two concepts. This work describes such a similarity measure and how it can be applied to the domain of land-use data.
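    The abstract does not spell out the similarity measure itself. As a generic, hypothetical example of a concept similarity measure over a land-use taxonomy, not Hall's actual measure, one can compare two concepts by the overlap of their ancestor sets:

```python
# Hypothetical taxonomy of land-use concepts: child -> parent.
PARENT = {
    "vineyard": "agricultural area",
    "pasture": "agricultural area",
    "agricultural area": "land",
    "forest": "land",
}

def ancestors(concept: str) -> set[str]:
    """The concept itself plus everything above it in the taxonomy."""
    out = {concept}
    while concept in PARENT:
        concept = PARENT[concept]
        out.add(concept)
    return out

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of the two concepts' ancestor sets (1.0 = identical)."""
    sa, sb = ancestors(a), ancestors(b)
    return len(sa & sb) / len(sa | sb)

print(similarity("vineyard", "pasture"))   # 0.5: both share 'agricultural area' and 'land'
```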
  3. Schubert, C.; Kinkeldey, C.; Reich, H.: Handbuch Datenbankanwendung zur Wissensrepräsentation im Verbundprojekt DeCOVER (2006) 0.00
    0.0011862369 = product of:
      0.017793551 = sum of:
        0.017793551 = weight(_text_:und in 4256) [ClassicSimilarity], result of:
          0.017793551 = score(doc=4256,freq=4.0), product of:
            0.06422601 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028978055 = queryNorm
            0.27704588 = fieldWeight in 4256, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=4256)
      0.06666667 = coord(1/15)
    
    Abstract
    The database-based object class description serves to record, by their properties, all object classes of the BNTK, CLC, GMES M 2.1 and ATKIS catalogues and of the DeCOVER proposal. The aim of the database application is the 'manual' evaluation of relationships and the presentation of all object classes with respect to the knowledge representation that was built. Using a hierarchically structured knowledge representation, ontologies make it possible to realize transformations between object classes, which is the objective pursued in the joint project DeCOVER in the sense of semantic interoperability.
  4. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.00
    0.0011104764 = product of:
      0.016657146 = sum of:
        0.016657146 = product of:
          0.03331429 = sum of:
            0.03331429 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
              0.03331429 = score(doc=1967,freq=4.0), product of:
                0.101476215 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028978055 = queryNorm
                0.32829654 = fieldWeight in 1967, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1967)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
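    The case studies rest on FRSAD's separation between a thema (the concept a class denotes) and its nomens (the appellations attached to it, such as captions in different translations). A minimal, hypothetical sketch of that separation for a DDC-like class follows; the URIs, captions and scheme labels are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class Nomen:
    value: str      # the appellation itself, e.g. a caption or notation
    language: str   # e.g. "en", "sv"
    scheme: str     # which edition or translation the appellation comes from

@dataclass
class Thema:
    """An FRSAD thema: the concept itself, independent of any particular appellation."""
    uri: str
    nomens: list[Nomen] = field(default_factory=list)

# Illustrative only: one concept carrying appellations from two DDC 22 translations.
ddc_class = Thema(
    uri="http://example.org/class/ddc22-XXX",
    nomens=[
        Nomen("Library & information sciences", "en", "DDC 22 (English)"),
        Nomen("Biblioteks- och informationsvetenskap", "sv", "DDC 22 (Swedish-English mixed)"),
    ],
)

for n in ddc_class.nomens:
    print(f"{n.scheme}: {n.value} [{n.language}]")
```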
  5. Kless, D.: From a thesaurus standard to a general knowledge organization standard?! (2007) 0.00
    0.0010484952 = product of:
      0.015727427 = sum of:
        0.015727427 = weight(_text_:und in 528) [ClassicSimilarity], result of:
          0.015727427 = score(doc=528,freq=2.0), product of:
            0.06422601 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028978055 = queryNorm
            0.24487628 = fieldWeight in 528, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=528)
      0.06666667 = coord(1/15)
    
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  6. Cochard, N.: ¬A data model and XML schema for BS 8723-5 (2007) 0.00
    0.0010484952 = product of:
      0.015727427 = sum of:
        0.015727427 = weight(_text_:und in 532) [ClassicSimilarity], result of:
          0.015727427 = score(doc=532,freq=2.0), product of:
            0.06422601 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028978055 = queryNorm
            0.24487628 = fieldWeight in 532, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=532)
      0.06666667 = coord(1/15)
    
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  7. Fayen, E.; Hlava, M.: Crosswalks and the USA perspective (2007) 0.00
    0.0010484952 = product of:
      0.015727427 = sum of:
        0.015727427 = weight(_text_:und in 536) [ClassicSimilarity], result of:
          0.015727427 = score(doc=536,freq=2.0), product of:
            0.06422601 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028978055 = queryNorm
            0.24487628 = fieldWeight in 536, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=536)
      0.06666667 = coord(1/15)
    
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  8. Svensson, L.: Panta rei : die Versionierung der DDC - Probleme, Anforderungen und mögliche Lösungen (2010) 0.00
    0.0010484952 = product of:
      0.015727427 = sum of:
        0.015727427 = weight(_text_:und in 2340) [ClassicSimilarity], result of:
          0.015727427 = score(doc=2340,freq=2.0), product of:
            0.06422601 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028978055 = queryNorm
            0.24487628 = fieldWeight in 2340, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=2340)
      0.06666667 = coord(1/15)
    
  9. Faro, S.; Francesconi, E.; Marinai, E.; Sandrucci, V.: Report on execution and results of the interoperability tests (2008) 0.00
    0.0010469672 = product of:
      0.015704507 = sum of:
        0.015704507 = product of:
          0.031409014 = sum of:
            0.031409014 = weight(_text_:22 in 7411) [ClassicSimilarity], result of:
              0.031409014 = score(doc=7411,freq=2.0), product of:
                0.101476215 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028978055 = queryNorm
                0.30952093 = fieldWeight in 7411, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7411)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Date
    7.11.2008 10:40:22
  10. Faro, S.; Francesconi, E.; Sandrucci, V.: Thesauri KOS analysis and selected thesaurus mapping methodology on the project case-study (2007) 0.00
    0.0010469672 = product of:
      0.015704507 = sum of:
        0.015704507 = product of:
          0.031409014 = sum of:
            0.031409014 = weight(_text_:22 in 2227) [ClassicSimilarity], result of:
              0.031409014 = score(doc=2227,freq=2.0), product of:
                0.101476215 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028978055 = queryNorm
                0.30952093 = fieldWeight in 2227, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2227)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Date
    7.11.2008 10:40:22
  11. Hinrichs, I.; Milmeister, G.; Schäuble, P.; Steenweg, H.: Computerunterstützte Sacherschließung mit dem Digitalen Assistenten (DA-2) (2016) 0.00
    0.0010379571 = product of:
      0.015569357 = sum of:
        0.015569357 = weight(_text_:und in 3563) [ClassicSimilarity], result of:
          0.015569357 = score(doc=3563,freq=4.0), product of:
            0.06422601 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028978055 = queryNorm
            0.24241515 = fieldWeight in 3563, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3563)
      0.06666667 = coord(1/15)
    
    Abstract
    We describe the Digital Assistant DA-2, which we implemented to support subject indexing in the IBS-Verbund. This web-based application is a complete reimplementation that incorporates the experience gained with the predecessor system at the Zentralbibliothek Zürich. We offer some thoughts on the future of subject indexing and give an overview of projects with similar aims, namely to make computer-assisted subject indexing more efficient and of higher quality.
  12. Si, L.: Encoding formats and consideration of requirements for mapping (2007) 0.00
    9.1609627E-4 = product of:
      0.013741443 = sum of:
        0.013741443 = product of:
          0.027482886 = sum of:
            0.027482886 = weight(_text_:22 in 540) [ClassicSimilarity], result of:
              0.027482886 = score(doc=540,freq=2.0), product of:
                0.101476215 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028978055 = queryNorm
                0.2708308 = fieldWeight in 540, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=540)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Date
    26.12.2011 13:22:27
  13. Balakrishnan, U.: DFG-Projekt: Coli-conc : das Mapping Tool "Cocoda" (2016) 0.00
    8.387961E-4 = product of:
      0.012581941 = sum of:
        0.012581941 = weight(_text_:und in 3036) [ClassicSimilarity], result of:
          0.012581941 = score(doc=3036,freq=2.0), product of:
            0.06422601 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028978055 = queryNorm
            0.19590102 = fieldWeight in 3036, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=3036)
      0.06666667 = coord(1/15)
    
    Abstract
    This contribution describes the Coli-conc project, which is run by the Verbundzentrale of the Gemeinsamer Bibliotheksverbund (GBV). Its goal is to develop an infrastructure for the exchange, creation and maintenance of concordances between library knowledge organization systems.
  14. Si, L.E.; O'Brien, A.; Probets, S.: Integration of distributed terminology resources to facilitate subject cross-browsing for library portal systems (2009) 0.00
    6.5435446E-4 = product of:
      0.009815317 = sum of:
        0.009815317 = product of:
          0.019630633 = sum of:
            0.019630633 = weight(_text_:22 in 3628) [ClassicSimilarity], result of:
              0.019630633 = score(doc=3628,freq=2.0), product of:
                0.101476215 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028978055 = queryNorm
                0.19345059 = fieldWeight in 3628, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3628)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Content
    This paper is a pre-print version presented at the ISKO UK 2009 conference, 22-23 June, prior to peer review and editing. For published proceedings see special issue of Aslib Proceedings journal.
  15. Suominen, O.; Hyvönen, N.: From MARC silos to Linked Data silos? (2017) 0.00
    6.2909705E-4 = product of:
      0.009436456 = sum of:
        0.009436456 = weight(_text_:und in 3732) [ClassicSimilarity], result of:
          0.009436456 = score(doc=3732,freq=2.0), product of:
            0.06422601 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028978055 = queryNorm
            0.14692576 = fieldWeight in 3732, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=3732)
      0.06666667 = coord(1/15)
    
    Abstract
    For some time now, libraries have increasingly been publishing their bibliographic metadata openly as Linked Data. However, very different models are used to structure the bibliographic data. Some libraries use an FRBR-based model with several layers of entities, while others use flat, record-oriented models. This proliferation of data models makes reuse of the bibliographic data difficult: in effect, libraries have merely exchanged their former MARC silos for mutually incompatible Linked Data silos, and it is therefore often hard to combine and reuse data sets. Minor differences in data modelling can be handled by schema mappings, but it is questionable whether interoperability has increased overall. The article presents the results of a study of various published sets of bibliographic data. It also examines the different models for representing bibliographic data as RDF, as well as tools for generating such data from the MARC format. Finally, the approach taken by the National Library of Finland is discussed.
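    The contrast between flat, record-oriented models and layered FRBR-style models can be made concrete with a small RDF sketch. The snippet below uses rdflib and invented example URIs, not the vocabulary of any particular library, to describe the same book once as a single flat record and once split into a work and a manifestation:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS

EX = Namespace("http://example.org/")   # invented namespace for illustration
g = Graph()

# Flat, record-oriented model: one node carries every statement.
record = EX["rec/123"]
g.add((record, DCTERMS.title, Literal("Sense and Sensibility")))
g.add((record, DCTERMS.creator, Literal("Austen, Jane")))
g.add((record, DCTERMS.issued, Literal("1995")))

# Layered FRBR-style model: statements split over work- and manifestation-level nodes.
work = EX["work/42"]
manifestation = EX["manif/42-1"]
g.add((work, DCTERMS.title, Literal("Sense and Sensibility")))
g.add((work, DCTERMS.creator, Literal("Austen, Jane")))
g.add((manifestation, DCTERMS.issued, Literal("1995")))
g.add((manifestation, DCTERMS.isVersionOf, work))   # stand-in for a FRBR embodiment link

print(g.serialize(format="turtle"))
```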
  16. Chen, H.: Semantic research for digital libraries (1999) 0.00
    5.9199263E-4 = product of:
      0.008879889 = sum of:
        0.008879889 = product of:
          0.017759778 = sum of:
            0.017759778 = weight(_text_:information in 1247) [ClassicSimilarity], result of:
              0.017759778 = score(doc=1247,freq=18.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.34911853 = fieldWeight in 1247, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1247)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    In this era of the Internet and distributed, multimedia computing, new and emerging classes of information systems applications have swept into the lives of office workers and people in general. From digital libraries, multimedia systems, geographic information systems, and collaborative computing to electronic commerce, virtual reality, and electronic video arts and games, these applications have created tremendous opportunities for information and computer science researchers and practitioners. As applications become more pervasive, pressing, and diverse, several well-known information retrieval (IR) problems have become even more urgent. Information overload, a result of the ease of information creation and transmission via the Internet and WWW, has become more troublesome (e.g., even stockbrokers and elementary school students, heavily exposed to various WWW search engines, are versed in such IR terminology as recall and precision). Significant variations in database formats and structures, the richness of information media (text, audio, and video), and an abundance of multilingual information content also have created severe information interoperability problems -- structural interoperability, media interoperability, and multilingual interoperability.
  17. Nicholson, D.: Help us make HILT's terminology services useful in your information service (2008) 0.00
    4.0279995E-4 = product of:
      0.006041999 = sum of:
        0.006041999 = product of:
          0.012083998 = sum of:
            0.012083998 = weight(_text_:information in 3654) [ClassicSimilarity], result of:
              0.012083998 = score(doc=3654,freq=12.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.23754507 = fieldWeight in 3654, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3654)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    The JISC-funded HILT project is looking to make contact with staff in information services or projects interested in helping it test and refine its developing terminology services. The project is currently working to create pilot web services that will deliver machine-readable terminology and cross-terminology mappings data likely to be useful to information services wishing to extend or enhance the efficacy of their subject search or browse services. Based on SRW/U, SOAP, and SKOS, the HILT facilities, when fully operational, will permit such services to improve their own subject search and browse mechanisms by using HILT data in a fashion transparent to their users. On request, HILT will serve up machine-processable data on individual subject schemes (broader terms, narrower terms, hierarchy information, preferred and non-preferred terms, and so on) and interoperability data (usually intellectual or automated mappings between schemes, but the architecture allows for the use of other methods) - data that can be used to enhance user services. The project is also developing an associated toolkit that will help service technical staff to embed HILT-related functionality into their services. The primary aim is to serve JISC funded information services or services at JISC institutions, but information services outside the JISC domain may also find the proposed services useful and wish to participate in the test and refine process.
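    HILT is described as serving broader terms, narrower terms and mappings as SKOS over SRW/U and SOAP; the actual HILT interfaces are not reproduced here. As an illustration of what consuming such SKOS data can look like, the hypothetical sketch below parses a small SKOS fragment with rdflib and walks the broader-term chain for one concept:

```python
from rdflib import Graph, URIRef
from rdflib.namespace import SKOS

# Hypothetical SKOS fragment of the kind a terminology service might return.
TTL = """
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://example.org/scheme/> .
ex:Cats    a skos:Concept ; skos:prefLabel "Cats"@en ;    skos:broader ex:Mammals .
ex:Mammals a skos:Concept ; skos:prefLabel "Mammals"@en ; skos:broader ex:Animals .
ex:Animals a skos:Concept ; skos:prefLabel "Animals"@en .
"""

g = Graph()
g.parse(data=TTL, format="turtle")

def broader_chain(uri: URIRef) -> list[str]:
    """Walk skos:broader links upwards and collect the preferred labels."""
    labels = []
    while True:
        parent = g.value(uri, SKOS.broader)
        if parent is None:
            return labels
        labels.append(str(g.value(parent, SKOS.prefLabel)))
        uri = parent

print(broader_chain(URIRef("http://example.org/scheme/Cats")))   # ['Mammals', 'Animals']
```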
  18. Mayr, P.; Petras, V.: Cross-concordances : terminology mapping and its effectiveness for information retrieval (2008) 0.00
    3.4178712E-4 = product of:
      0.0051268064 = sum of:
        0.0051268064 = product of:
          0.010253613 = sum of:
            0.010253613 = weight(_text_:information in 2323) [ClassicSimilarity], result of:
              0.010253613 = score(doc=2323,freq=6.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.20156369 = fieldWeight in 2323, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2323)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    The German Federal Ministry for Education and Research funded a major terminology mapping initiative, which found its conclusion in 2007. The task of this terminology mapping initiative was to organize, create and manage 'cross-concordances' between controlled vocabularies (thesauri, classification systems, subject heading lists) centred around the social sciences but quickly extending to other subject areas. 64 crosswalks with more than 500,000 relations were established. In the final phase of the project, a major evaluation effort to test and measure the effectiveness of the vocabulary mappings in an information system environment was conducted. The paper reports on the cross-concordance work and evaluation results.
    Content
    Paper presented at: World library and information congress: 74th IFLA general conference and council, 10-14 August 2008, Québec, Canada.
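    Cross-concordances of the kind evaluated in this initiative are nowadays commonly expressed with SKOS mapping properties. The fragment below is a hypothetical illustration with invented identifiers, not one of the project's 64 crosswalks, showing how relations of different strength between terms from two vocabularies can be stated:

```python
from rdflib import Graph, Namespace
from rdflib.namespace import SKOS

# Invented identifiers for illustration; these are not relations from the project's crosswalks.
VOC_A = Namespace("http://example.org/thesaurusA/")
VOC_B = Namespace("http://example.org/vocabularyB/")

g = Graph()
g.add((VOC_A["unemployment"], SKOS.exactMatch, VOC_B["unemployment"]))
g.add((VOC_A["labour-market-policy"], SKOS.broadMatch, VOC_B["economic-policy"]))

for source, relation, target in g:
    print(f"{source}  --{relation.split('#')[-1]}-->  {target}")
```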
  19. Bastos Vieira, S.; DeBrito, M.; Mustafa El Hadi, W.; Zumer, M.: Developing imaged KOS with the FRSAD Model : a conceptual methodology (2016) 0.00
    3.2223997E-4 = product of:
      0.004833599 = sum of:
        0.004833599 = product of:
          0.009667198 = sum of:
            0.009667198 = weight(_text_:information in 3109) [ClassicSimilarity], result of:
              0.009667198 = score(doc=3109,freq=12.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.19003606 = fieldWeight in 3109, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3109)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    This proposal presents the methodology of indexing with images suggested by De Brito and Caribé (2015). The imagetic model is used as a mechanism compatible with FRSAD for the global sharing and use of subject data, both within the library sector and beyond. The conceptual model of imagetic indexing shows how images are related to topics, and 'key-images' are interpreted as nomens to implement the FRSAD model. Indexing with images consists of using images instead of keywords or descriptors to represent and organize information. Implementing imaged navigation in OPACs offers multiple advantages derived from rethinking the OPAC anew, since the aim is to share concepts within the subject authority data. Images, carrying linguistic objects, cut across social and cultural concepts. In practice this includes translated metadata, symmetrical multilingual thesauri, or any traditional indexing tools. iOPAC embodies efforts focused on the conceptual level, as expected from librarians. Imaged interfaces are more intuitive, since users do not need specific training for information retrieval; they offer easier comprehension of indexing codes, greater conceptual portability of descriptors (as images), and better interoperability between discourse codes and indexing competences, which positively affects social and cultural interoperability. The imagetic methodology opens R&D directions for more suitable interfaces that take into consideration users with specific needs such as deafness and illiteracy. It raises questions about the paradigm of the primacy of orality in information systems and paves the way for the legitimacy of multiple perspectives in document indexing by suggesting a more universal communication system based on images. Interdisciplinary competencies in neuroscience, linguistics and information science would be desirable for further investigation of the nature of cognitive processes in information organization and classification, while developing assistive KOS for individuals with communication problems such as autism and deafness.
  20. Balakrishnan, U.; Voß, J.: ¬The Cocoda mapping tool (2015) 0.00
    2.8195998E-4 = product of:
      0.0042293994 = sum of:
        0.0042293994 = product of:
          0.008458799 = sum of:
            0.008458799 = weight(_text_:information in 4205) [ClassicSimilarity], result of:
              0.008458799 = score(doc=4205,freq=12.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.16628155 = fieldWeight in 4205, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4205)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    Since the 1990s, we have seen an explosion of information and, with it, a growing need for data and information aggregation systems that store and manage information. However, most information sources apply different Knowledge Organization Systems (KOS) to describe the content of the stored data. This heterogeneous mix of KOS across systems complicates access and the seamless sharing of information and knowledge. Concordances, also known as cross-concordances or terminology mappings, map different KOS to each other to improve information retrieval in such a heterogeneous mix of systems (Mayr 2010, Keil 2012). Mappings are also considered a valuable and essential working tool for coherent indexing with different terminologies. However, despite efforts at standardization (e.g. SKOS, ISO 25964-2, Keil 2012, Soergel 2011), there is a significant scarcity of concordances, which has led to an inability to establish uniform exchange formats as well as methods and tools for maintaining mappings and making them easily accessible. This is particularly true in the field of library classification schemes. In essence, there is a lack of infrastructure for the provision and exchange of concordances, for their management and quality assessment, and for tools that would enable semi-automatic generation of mappings. The project "coli-conc" therefore aims to address this gap by creating the necessary infrastructure. This includes the specification of a data format for the exchange of concordances (JSKOS), the specification and implementation of web APIs to query concordance databases (JSKOS-API), and a modular web application providing uniform access to knowledge organization systems, concordances and concordance assessments (Cocoda).
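    JSKOS is JSON-based, so a concordance mapping can be handled as plain JSON. The record below is a simplified, hypothetical example in the spirit of JSKOS; the field names are given to the best of the author's knowledge and should be verified against the published JSKOS specification, and all URIs are invented.

```python
import json

# Simplified, hypothetical mapping in the spirit of JSKOS; verify field names
# against the published JSKOS specification before relying on them.
mapping = {
    "from": {"memberSet": [{"uri": "http://example.org/ddc/020"}]},
    "to":   {"memberSet": [{"uri": "http://example.org/rvk/AN-50000"}]},
    "fromScheme": {"uri": "http://example.org/voc/ddc"},
    "toScheme":   {"uri": "http://example.org/voc/rvk"},
    "type": ["http://www.w3.org/2004/02/skos/core#closeMatch"],
}

print(json.dumps(mapping, indent=2))
```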

Languages

  • e 43
  • d 16

Types