Search (158 results, page 1 of 8)

  • × theme_ss:"Semantische Interoperabilität"
  1. Woldering, B.: ¬Die Europäische Digitale Bibliothek nimmt Gestalt an (2007) 0.04
    0.043479174 = product of:
      0.08695835 = sum of:
        0.08219909 = weight(_text_:digitale in 2439) [ClassicSimilarity], result of:
          0.08219909 = score(doc=2439,freq=8.0), product of:
            0.18027179 = queryWeight, product of:
              5.158747 = idf(docFreq=690, maxDocs=44218)
              0.034944877 = queryNorm
            0.45597312 = fieldWeight in 2439, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.158747 = idf(docFreq=690, maxDocs=44218)
              0.03125 = fieldNorm(doc=2439)
        0.004759258 = weight(_text_:information in 2439) [ClassicSimilarity], result of:
          0.004759258 = score(doc=2439,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.0775819 = fieldWeight in 2439, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=2439)
      0.5 = coord(2/4)
    
    Abstract
    The development of the European Digital Library was placed on a solid footing in autumn 2007: with the European Digital Library Foundation, a legally competent organization is now in place as the body responsible for the European Digital Library. It initially acts as the steering body for the EU-funded project EDLnet and will successively take over the tasks required for building and further developing the European Digital Library. The founding members are ten European umbrella organizations from the fields of libraries, archives, audiovisual collections and museums. The board members are the chairwoman Elisabeth Niggemann (CENL), the vice-chairwoman Martine de Boisdeffre (EURBICA), the treasurer Edwin van Huis (FIAT), and Wim van Drimmelen, director general of the Koninklijke Bibliotheek, the national library of the Netherlands, which hosts the European Digital Library. The prototype for the European Digital Library is being developed within the EDLnet project. The first version of the prototype was presented at the international conference "One more step towards the European Digital Library", held on 31 January and 1 February 2008 at the Deutsche Nationalbibliothek (DNB) in Frankfurt am Main. The final version of the prototype will be presented in Paris in November 2008 by Viviane Reding, the EU Commissioner for Information Society and Media. This prototype will offer direct access to at least two million digitized books, photographs, maps, sound recordings, film footage and archival records from Europe's libraries, archives, audiovisual collections and museums.
    Content
    Contains, among other things: "Interoperability as the core piece - Technical and semantic interoperability thus form the core of a functioning European Digital Library. But before ways can be found for how something can work, it must first be determined what is supposed to work. Here the user requirements are the measure of all things, which is why an entire EDLnet work package deals with the user perspective, user requirements and usability of the European Digital Library; it formulates requirements, which are then implemented in the 'Interoperability' work package. Deciding which content is presented and how, however, is not only a matter of technical and semantic questions; a business model must also be developed that defines what the participating institutions and organizations contribute to the European Digital Library, in what form and under what conditions. The business model, too, will affect technical and semantic interoperability, and it passes the requirements derived from it to the corresponding work package for implementation. The EDLnet project thus installs a continuous work cycle in which the requirements for the European Digital Library are formulated, handed on to the interoperability work package and implemented there. The resulting solution is in turn reported back to the 'user perspective' and 'business model' work packages, tested and commented on, and technical solutions are then sought for the comments. This is a form of 'rapid prototyping': the functionality is extended step by step according to the feedback of future users and project partners, while the prototype is kept runnable at all times and developed further to production maturity. In this way, quick results are expected, with a low risk of misdevelopment thanks to the constant feedback."
    Theme
    Information Gateway
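The indented figures beneath each hit are Lucene "explain" trees for the ClassicSimilarity ranking model. As a minimal sketch (assuming only the standard ClassicSimilarity formula; all figures taken from the explain tree of hit no. 1 above), the top-level score is assembled from per-term weights and the coordination factor:

```python
from math import sqrt

def term_score(freq, idf, query_norm, field_norm):
    """Lucene ClassicSimilarity per-term score: the product of the
    queryWeight (idf * queryNorm) and the fieldWeight
    (tf * idf * fieldNorm), where tf = sqrt(term frequency)."""
    query_weight = idf * query_norm
    field_weight = sqrt(freq) * idf * field_norm
    return query_weight * field_weight

# Figures from the explain tree of hit no. 1 (doc 2439) above.
digitale = term_score(8.0, 5.158747, 0.034944877, 0.03125)
information = term_score(2.0, 1.7554779, 0.034944877, 0.03125)
coord = 2 / 4  # coord(2/4): two of four query clauses matched
score = coord * (digitale + information)  # ~ 0.043479174, as reported above
```

Each nested line of the explain tree corresponds to one factor in this product, so the reported scores can be reproduced term by term.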
  2. Mayr, P.: Information Retrieval-Mehrwertdienste für Digitale Bibliotheken : Crosskonkordanzen und Bradfordizing (2010) 0.04
    0.037007116 = product of:
      0.07401423 = sum of:
        0.061649315 = weight(_text_:digitale in 4910) [ClassicSimilarity], result of:
          0.061649315 = score(doc=4910,freq=2.0), product of:
            0.18027179 = queryWeight, product of:
              5.158747 = idf(docFreq=690, maxDocs=44218)
              0.034944877 = queryNorm
            0.34197983 = fieldWeight in 4910, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.158747 = idf(docFreq=690, maxDocs=44218)
              0.046875 = fieldNorm(doc=4910)
        0.012364916 = weight(_text_:information in 4910) [ClassicSimilarity], result of:
          0.012364916 = score(doc=4910,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.20156369 = fieldWeight in 4910, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4910)
      0.5 = coord(2/4)
    
    RSWK
    Dokumentationssprache / Heterogenität / Information Retrieval / Ranking / Evaluation
    Subject
    Dokumentationssprache / Heterogenität / Information Retrieval / Ranking / Evaluation
  3. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.03
    0.029893843 = product of:
      0.059787687 = sum of:
        0.051374428 = weight(_text_:digitale in 1000) [ClassicSimilarity], result of:
          0.051374428 = score(doc=1000,freq=2.0), product of:
            0.18027179 = queryWeight, product of:
              5.158747 = idf(docFreq=690, maxDocs=44218)
              0.034944877 = queryNorm
            0.2849832 = fieldWeight in 1000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.158747 = idf(docFreq=690, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
        0.008413259 = weight(_text_:information in 1000) [ClassicSimilarity], result of:
          0.008413259 = score(doc=1000,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.13714671 = fieldWeight in 1000, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1000)
      0.5 = coord(2/4)
    
    Abstract
    This thesis presents the construction of a thematically ordered thesaurus based on the subject headings of the Gemeinsame Normdatei (GND), using the DDC notations they contain. The DDC subject groups of the Deutsche Nationalbibliothek form the top level of order of this thesaurus. The thesaurus is constructed in a rule-based fashion, applying Linked Data principles in a SPARQL processor. It serves the automated extraction of metadata from scholarly publications by means of a computational-linguistic extractor that processes digital full texts. The extractor identifies subject headings by comparing character strings in the text with the labels in the thesaurus, ranks the matches by relevance within the text, and returns the associated subject groups in rank order. The underlying assumption is that the sought subject group will be returned among the top ranks. The performance of the method is validated in a three-stage procedure. First, a gold standard is compiled from documents retrievable in the DNB online catalogue, based on their metadata and the findings of a brief inspection. The documents are distributed over 14 of the subject groups, with a batch size of 50 documents each. All documents are then processed with the extractor and the categorization results are documented. Finally, the resulting retrieval performance is assessed both for a hard (binary) categorization and for a ranked return of the subject groups.
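The extraction pipeline described in the abstract (string comparison of text tokens against thesaurus labels, relevance ranking of the hits, rank-ordered return of DDC subject groups) might be sketched as follows; the miniature thesaurus, its labels and group assignments are invented for illustration, and plain token frequency stands in for the thesis's relevance ranking:

```python
import re
from collections import Counter

# Invented miniature thesaurus: preferred label -> DDC subject group.
LABEL_TO_GROUP = {
    "thesaurus": "020 Library and information sciences",
    "sprache": "400 Language",
}

def rank_ddc_groups(fulltext):
    """Match thesaurus labels against the tokens of a full text, rank
    the hits (here simply by in-text frequency) and return the mapped
    DDC subject groups in descending rank order; the assumption is that
    the correct group appears among the top ranks."""
    tokens = re.findall(r"\w+", fulltext.lower())
    hits = Counter(t for t in tokens if t in LABEL_TO_GROUP)
    ranked = []
    for label, _freq in hits.most_common():
        group = LABEL_TO_GROUP[label]
        if group not in ranked:
            ranked.append(group)
    return ranked
```

A binary categorization then only checks the top-ranked group, while a ranked evaluation scores the position of the correct group in the returned list.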
    Content
    Master thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the presentation at: https://www.google.com/url?sa=i&rct=j&q=&esrc=s&source=web&cd=&ved=0CAIQw7AJahcKEwjwoZzzytz_AhUAAAAAHQAAAAAQAg&url=https%3A%2F%2Fwiki.dnb.de%2Fdownload%2Fattachments%2F252121510%2FDA3%2520Workshop-Gabler.pdf%3Fversion%3D1%26modificationDate%3D1671093170000%26api%3Dv2&psig=AOvVaw0szwENK1or3HevgvIDOfjx&ust=1687719410889597&opi=89978449.
    Imprint
    Wien / Library and Information Studies : Universität
  4. Stempfhuber, M.; Zapilko, M.B.: ¬Ein Ebenenmodell für die semantische Integration von Primärdaten und Publikationen in Digitalen Bibliotheken (2013) 0.03
    0.02866175 = product of:
      0.0573235 = sum of:
        0.051374428 = weight(_text_:digitale in 917) [ClassicSimilarity], result of:
          0.051374428 = score(doc=917,freq=2.0), product of:
            0.18027179 = queryWeight, product of:
              5.158747 = idf(docFreq=690, maxDocs=44218)
              0.034944877 = queryNorm
            0.2849832 = fieldWeight in 917, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.158747 = idf(docFreq=690, maxDocs=44218)
              0.0390625 = fieldNorm(doc=917)
        0.0059490725 = weight(_text_:information in 917) [ClassicSimilarity], result of:
          0.0059490725 = score(doc=917,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.09697737 = fieldWeight in 917, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=917)
      0.5 = coord(2/4)
    
    Abstract
    Digital libraries currently face the challenge of meeting the changed information needs of their scientific users and of providing integrated access to different types of information (e.g. publications, primary data, researcher and organization profiles, research project information) that are increasingly available in digital form, and of making them available in virtual research environments. The resulting challenges of structural and semantic heterogeneity stem from a wide range of different metadata standards, subject indexing methods and indexing approaches for different types of information. So far, however, no generally accepted, integrating model for the organization and retrieval of knowledge in digital libraries exists. This paper presents current research developments and activities that address the problem of semantic interoperability in digital libraries, and it presents a model for integrated search in textual data (e.g. publications) and factual data (e.g. primary data) that takes up various approaches from current research and sets them in relation to each other. Embedded in the research cycle, traditional subject indexing methods for publications meet newer ontology-based approaches, which seem better suited to representing more complex information and relationships (e.g. in social science survey data). The advantages of the model are (1) the easy reusability of existing knowledge organization systems and (2) a low effort for concept modelling with ontologies.
  5. Fiala, S.: Deutscher Bibliothekartag Leipzig 2007 : Sacherschließung - Informationsdienstleistung nach Maß (2007) 0.02
    0.017936306 = product of:
      0.035872612 = sum of:
        0.030824658 = weight(_text_:digitale in 415) [ClassicSimilarity], result of:
          0.030824658 = score(doc=415,freq=2.0), product of:
            0.18027179 = queryWeight, product of:
              5.158747 = idf(docFreq=690, maxDocs=44218)
              0.034944877 = queryNorm
            0.17098992 = fieldWeight in 415, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.158747 = idf(docFreq=690, maxDocs=44218)
              0.0234375 = fieldNorm(doc=415)
        0.0050479556 = weight(_text_:information in 415) [ClassicSimilarity], result of:
          0.0050479556 = score(doc=415,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.08228803 = fieldWeight in 415, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0234375 = fieldNorm(doc=415)
      0.5 = coord(2/4)
    
    Content
    ""Sacherschließung - Informationsdienstleistung nach Maß" (subject indexing - information services made to measure): under this title a very instructive series of talks took place at the 3rd Leipzig Congress for Information and Libraries ("Information und Ethik"). New projects for networking collections that have been indexed in very different ways were presented. The question of how far users can be involved in subject indexing was also addressed. The work of librarians can offer valuable starting points for alternative methods. The interplay of intellectual and machine indexing will play a major role in the future. One way to make the indexing of constantly growing information sources feasible could be indexing based on a division of labour and cooperation with other information institutions. At the centre of all these considerations stood the heterogeneity problems that can arise from different indexing rules, different working instruments, different languages and differences in the meaning of terms. The afternoon began with a concrete example: "Zum Stand der Heterogenitätsbehandlung in vascoda" (Philipp Mayr, Bonn, and Anne-Kathrin Walter, Bonn). The science portal vascoda comprises various subject portals and can be searched either across disciplines or by subject. Semantic heterogeneity arises from the various information offerings that exist within a subject portal and are aggregated in the vascoda portal; handling this heterogeneity is therefore the prime goal. The creation of cross-concordances (between indexing languages within a subject field and between indexing languages of different subject fields) and the so-called heterogeneity service (a term switching service) were presented using this science portal as an example.
"Cross-concordances are directed, relevance-weighted relations between the terms of two thesauri, classifications or other controlled vocabularies." In the heterogeneity service, the search query is to be transformed in such a way that it reaches all relevant documents in the various databases. In evaluating the cross-concordances, the questions that arise are how precisely the relations hit their target and how relevant the additional hits found via the cross-concordances are. The evaluation is carried out in three steps: first with natural language in free-text search, then translated into descriptors in subject-heading search, and finally with descriptors in subject-heading search using the cross-concordances. First results of the evaluation of the cross-concordances are expected in the course of the summer.
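A heterogeneity service driven by such cross-concordances amounts to a term switching table. A minimal sketch, with vocabulary names, terms and relevance weights all invented for illustration:

```python
# Invented cross-concordance: directed, relevance-weighted relations
# from terms of one controlled vocabulary to terms of another.
CROSS_CONCORDANCE = {
    ("vocab-a", "Informationsmanagement"): [
        ("vocab-b", "Wissensmanagement", 1.0),
        ("vocab-b", "Informationswirtschaft", 0.5),
    ],
}

def switch_term(source_vocab, term, target_vocab, min_relevance=0.5):
    """Heterogeneity-service sketch: rewrite a query term from the
    source vocabulary into the terms of the target vocabulary, so that
    the query also reaches documents indexed with that vocabulary."""
    mappings = CROSS_CONCORDANCE.get((source_vocab, term), [])
    return [t for vocab, t, weight in mappings
            if vocab == target_vocab and weight >= min_relevance]
```

Raising the relevance threshold trades recall (additional hits via weaker relations) against precision, which is exactly the trade-off the evaluation described above measures.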
    The third talk in this series, "Anfragetransfers zur Integration von Internetquellen in digitalen Bibliotheken auf der Grundlage statistischer Termrelationen" (Robert Strötgen, Hildesheim), presented a machine method for integrating selected, but not subject-indexed, collections of internet documents into digital libraries. This research project focuses on the encounter between well-indexed subject databases and internet documents. "If selected subject-specific internet documents are to be integrated to broaden a search in a digital library, this is possible either by restricting oneself to high-quality, laboriously compiled clearinghouses or by 'naively' forwarding the user query." The project description continues: "Applying machine learning methods, semantic relations between the classes of different ontologies are created which enable transitions between these ontologies. The transfer between ontologies and free-text terms is of particular importance for this research project." Building on the CARMEN project, semantic relations are computed automatically in this project by statistical machine learning. A user query formulated with a thesaurus is thus transformed for querying collections of internet documents.
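The statistical term relations mentioned in the talk could, under the simplest of assumptions, be estimated from document co-occurrence counts; the toy collection and the conditional-probability measure below are illustrative only, not the project's actual machine learning method:

```python
from collections import Counter
from itertools import combinations

def term_associations(docs, min_cooccurrence=2):
    """Estimate directed term relations P(b | a) from how often two
    indexing terms co-occur in the same document."""
    term_freq = Counter(t for d in docs for t in d)
    pair_freq = Counter(
        pair for d in docs for pair in combinations(sorted(d), 2)
    )
    relations = {}
    for (a, b), n in pair_freq.items():
        if n < min_cooccurrence:
            continue  # ignore rare, unreliable pairs
        relations[(a, b)] = n / term_freq[a]  # P(b | a)
        relations[(b, a)] = n / term_freq[b]  # P(a | b)
    return relations

# Toy collection: each document reduced to its set of terms (invented).
docs = [
    {"ontologie", "semantik"},
    {"ontologie", "semantik", "metadaten"},
    {"metadaten", "katalog"},
]
```

A query transfer would then expand or replace a thesaurus term with its strongest statistical associates before querying the unindexed internet collection.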
  6. Chen, H.: Semantic research for digital libraries (1999) 0.01
    0.0053541656 = product of:
      0.021416662 = sum of:
        0.021416662 = weight(_text_:information in 1247) [ClassicSimilarity], result of:
          0.021416662 = score(doc=1247,freq=18.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.34911853 = fieldWeight in 1247, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1247)
      0.25 = coord(1/4)
    
    Abstract
    In this era of the Internet and distributed, multimedia computing, new and emerging classes of information systems applications have swept into the lives of office workers and people in general. From digital libraries, multimedia systems, geographic information systems, and collaborative computing to electronic commerce, virtual reality, and electronic video arts and games, these applications have created tremendous opportunities for information and computer science researchers and practitioners. As applications become more pervasive, pressing, and diverse, several well-known information retrieval (IR) problems have become even more urgent. Information overload, a result of the ease of information creation and transmission via the Internet and WWW, has become more troublesome (e.g., even stockbrokers and elementary school students, heavily exposed to various WWW search engines, are versed in such IR terminology as recall and precision). Significant variations in database formats and structures, the richness of information media (text, audio, and video), and an abundance of multilingual information content also have created severe information interoperability problems -- structural interoperability, media interoperability, and multilingual interoperability.
  7. Krause, J.: Shell Model, Semantic Web and Web Information Retrieval (2006) 0.00
    0.0042066295 = product of:
      0.016826518 = sum of:
        0.016826518 = weight(_text_:information in 6061) [ClassicSimilarity], result of:
          0.016826518 = score(doc=6061,freq=16.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.27429342 = fieldWeight in 6061, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6061)
      0.25 = coord(1/4)
    
    Abstract
    The mid-1990s were marked by increased enthusiasm for the possibilities of the WWW, which has only recently given way - at least in relation to scientific information - to a more differentiated weighing of its advantages and disadvantages. Web Information Retrieval originated as a specialized discipline with great commercial significance (for an overview see Lewandowski 2005). Besides the new technological structure that enables the indexing and searching (in seconds) of unimaginable amounts of data worldwide, new assessment processes for the ranking of search results are being developed, which use the link structures of the Web. They are the main innovation with respect to the traditional "mother discipline" of Information Retrieval. From the beginning, link structures of Web pages have been applied in commercial search engines in a wide array of variations. From the perspective of scientific information, link topology based approaches were in essence trying to solve a self-created problem: on the one hand, it quickly became clear that the openness of the Web led to an up-to-now unknown increase in available information, but this also caused the quality of the Web pages searched to become a problem - and with it the relevance of the results. The gatekeeper function of traditional information providers, which narrows down every user query to focus on high-quality sources, was lacking. Therefore, the recognition of the "authoritativeness" of Web pages by general search engines such as Google was one of the most important factors for their success.
    Source
    Information und Sprache: Beiträge zu Informationswissenschaft, Computerlinguistik, Bibliothekswesen und verwandten Fächern. Festschrift für Harald H. Zimmermann. Herausgegeben von Ilse Harms, Heinz-Dirk Luckhardt und Hans W. Giessen
  8. Prongué, N.; Schneider, R.: Modelling library linked data in practice (2015) 0.00
    0.0042066295 = product of:
      0.016826518 = sum of:
        0.016826518 = weight(_text_:information in 2985) [ClassicSimilarity], result of:
          0.016826518 = score(doc=2985,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.27429342 = fieldWeight in 2985, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=2985)
      0.25 = coord(1/4)
    
    Source
    Re:inventing information science in the networked society: Proceedings of the 14th International Symposium on Information Science, Zadar/Croatia, 19th-21st May 2015. Eds.: F. Pehar, C. Schloegl u. C. Wolff
  9. Nicholson, D.; Neill, S.: Interoperability in subject terminologies : the HILT project (2001) 0.00
    0.004164351 = product of:
      0.016657405 = sum of:
        0.016657405 = weight(_text_:information in 4138) [ClassicSimilarity], result of:
          0.016657405 = score(doc=4138,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.27153665 = fieldWeight in 4138, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=4138)
      0.25 = coord(1/4)
    
    Source
    New review of information networking. 7(2001) no.xx, S.147-157
  10. Hubrich, J.; Mengel, T.; Müller, K.; Jacobs, J.-H.: Improving subject access in global information spaces : reflections upon internationalization and localization of Knowledge Organization Systems (KOS) (2008) 0.00
    0.004164351 = product of:
      0.016657405 = sum of:
        0.016657405 = weight(_text_:information in 2190) [ClassicSimilarity], result of:
          0.016657405 = score(doc=2190,freq=8.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.27153665 = fieldWeight in 2190, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2190)
      0.25 = coord(1/4)
    
    Abstract
    With the establishment of global information spaces that are characterized by heterogeneity, new kinds of knowledge organization systems (KOS) are needed to facilitate efficient subject access to available information resources. KOS need not be built bottom-up. Internationalization and localization of common KOS make it possible to use all the different kinds of existing subject-indexing data for retrieval purposes, and they help create a user-friendly tool that supports cross-national query modification and hermeneutic processes of information seeking as well as precise topical queries.
  11. Sieglerschmidt, J.: Convergence of internet services in the cultural heritage sector : the long way to common vocabularies, metadata formats, ontologies (2008) 0.00
    0.0039907596 = product of:
      0.015963038 = sum of:
        0.015963038 = weight(_text_:information in 1686) [ClassicSimilarity], result of:
          0.015963038 = score(doc=1686,freq=10.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.2602176 = fieldWeight in 1686, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1686)
      0.25 = coord(1/4)
    
    Abstract
    For several years it has been observed that information offered by different knowledge-producing institutions on the internet is becoming more and more interlinked. This tendency will increase, because fragmented information offerings on the internet make the retrieval of information difficult or even impossible. At the same time, the quantity of information offered on the internet grows exponentially in Europe - and elsewhere - due to many digitization projects. Insofar as funding institutions make the acceptance of projects conditional on the observation of certain documentation standards, the knowledge created will be retrievable and will remain so for a longer time. Otherwise the retrieval of information will become a matter of chance, due to the limits of fragmented, knowledge-producing social groups.
  12. Stempfhuber, M.; Zapilko, B.: Modelling text-fact-integration in digital libraries (2009) 0.00
    0.0039907596 = product of:
      0.015963038 = sum of:
        0.015963038 = weight(_text_:information in 3393) [ClassicSimilarity], result of:
          0.015963038 = score(doc=3393,freq=10.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.2602176 = fieldWeight in 3393, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3393)
      0.25 = coord(1/4)
    
    Abstract
    Digital Libraries currently face the challenge of integrating many different types of research information (e.g. publications, primary data, experts' profiles, institutional profiles, project information etc.) according to their scientific users' needs. To date no general, integrated model for knowledge organization and retrieval in Digital Libraries exists. This causes the problem of structural and semantic heterogeneity due to the wide range of metadata standards, indexing vocabularies and indexing approaches used for different types of information. The research presented in this paper focuses on areas in which activities are being undertaken in the field of Digital Libraries in order to treat semantic interoperability problems. We present a model for the integrated retrieval of factual and textual data which combines multiple approaches to semantic interoperability and sets them in context. Embedded in the research cycle, traditional content indexing methods for publications meet the newer, but rarely used, ontology-based approaches, which seem better suited to representing complex information like that contained in survey data. The benefits of our model are (1) easy re-use of available knowledge organisation systems and (2) reduced effort for domain modelling with ontologies.
    Theme
    Information Gateway
  13. Metadata and semantics research : 9th Research Conference, MTSR 2015, Manchester, UK, September 9-11, 2015, Proceedings (2015) 0.00
    0.0039907596 = product of:
      0.015963038 = sum of:
        0.015963038 = weight(_text_:information in 3274) [ClassicSimilarity], result of:
          0.015963038 = score(doc=3274,freq=10.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.2602176 = fieldWeight in 3274, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3274)
      0.25 = coord(1/4)
    
    Content
    The papers are organized in several sessions and tracks: general track on ontology evolution, engineering, and frameworks, semantic Web and metadata extraction, modelling, interoperability and exploratory search, data analysis, reuse and visualization; track on digital libraries, information retrieval, linked and social data; track on metadata and semantics for open repositories, research information systems and data infrastructure; track on metadata and semantics for agriculture, food and environment; track on metadata and semantics for cultural collections and applications; track on European and national projects.
    LCSH
    Information storage and retrieval systems
    Series
    Communications in computer and information science; 544
    Subject
    Information storage and retrieval systems
  14. Nicholson, D.: Help us make HILT's terminology services useful in your information service (2008) 0.00
    Abstract
    The JISC-funded HILT project is looking to make contact with staff in information services or projects interested in helping it test and refine its developing terminology services. The project is currently working to create pilot web services that will deliver machine-readable terminology and cross-terminology mappings data likely to be useful to information services wishing to extend or enhance the efficacy of their subject search or browse services. Based on SRW/U, SOAP, and SKOS, the HILT facilities, when fully operational, will permit such services to improve their own subject search and browse mechanisms by using HILT data in a fashion transparent to their users. On request, HILT will serve up machine-processable data on individual subject schemes (broader terms, narrower terms, hierarchy information, preferred and non-preferred terms, and so on) and interoperability data (usually intellectual or automated mappings between schemes, but the architecture allows for the use of other methods) - data that can be used to enhance user services. The project is also developing an associated toolkit that will help service technical staff to embed HILT-related functionality into their services. The primary aim is to serve JISC funded information services or services at JISC institutions, but information services outside the JISC domain may also find the proposed services useful and wish to participate in the test and refine process.
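The abstract above describes machine-readable SKOS data (preferred and non-preferred terms, broader/narrower links) served over SRW/U and SOAP. As a rough illustration only, here is a minimal Python sketch of how a client might extract such fields from a SKOS concept record; the concept URIs and labels are invented for demonstration and are not HILT's actual output or API.

```python
# Illustrative sketch: pulling hierarchy and label data out of a small
# SKOS concept record of the kind a terminology service might return.
# The sample record below is invented, not real HILT data.
import xml.etree.ElementTree as ET

SKOS = "http://www.w3.org/2004/02/skos/core#"
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

SAMPLE = f"""<rdf:RDF xmlns:rdf="{RDF}" xmlns:skos="{SKOS}">
  <skos:Concept rdf:about="http://example.org/scheme/libraries">
    <skos:prefLabel>Libraries</skos:prefLabel>
    <skos:altLabel>Library services</skos:altLabel>
    <skos:broader rdf:resource="http://example.org/scheme/institutions"/>
    <skos:narrower rdf:resource="http://example.org/scheme/digital-libraries"/>
  </skos:Concept>
</rdf:RDF>"""

def parse_concept(xml_text):
    """Return preferred label, non-preferred labels, and hierarchy links."""
    root = ET.fromstring(xml_text)
    concept = root.find(f"{{{SKOS}}}Concept")
    resource = f"{{{RDF}}}resource"
    return {
        "prefLabel": concept.findtext(f"{{{SKOS}}}prefLabel"),
        "altLabels": [e.text for e in concept.findall(f"{{{SKOS}}}altLabel")],
        "broader": [e.get(resource) for e in concept.findall(f"{{{SKOS}}}broader")],
        "narrower": [e.get(resource) for e in concept.findall(f"{{{SKOS}}}narrower")],
    }

record = parse_concept(SAMPLE)
print(record["prefLabel"])    # Libraries
print(record["broader"][0])   # http://example.org/scheme/institutions
```

A service consumer would use such parsed data to expand or suggest subject terms in its own search interface, transparently to the end user.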
  15. Coen, G.; Smiraglia, R.P.: Toward better interoperability of the NARCIS classification (2019) 0.00
    Abstract
    Research information can be useful to science stakeholders for discovering, evaluating and planning research activities. In the Netherlands, the institute tasked with the stewardship of national research information is DANS (Data Archiving and Networked Services). DANS is the home of NARCIS, the national portal for research information, which uses a similarly named national research classification. The NARCIS Classification assigns symbols to represent the knowledge bases of contributing scholars. A recent research stream in knowledge organization known as comparative classification uses two or more classifications experimentally to generate empirical evidence about coverage of conceptual content, population of the classes, and economy of classification. This paper builds on that research in order to further understand the comparative impact of the NARCIS Classification alongside a classification designed specifically for information resources. Our six cases come from the DANS project Knowledge Organization System Observatory (KOSo), which itself is classified using the Information Coding Classification (ICC) created in 1982 by Ingetraut Dahlberg. ICC is considered to have the merits of universality, faceting, and a top-down approach. Results are exploratory, indicating that both classifications provide fairly precise coverage. The inflexibility of the NARCIS Classification makes it difficult to express complex concepts. The meta-ontological, epistemic stance of the ICC is apparent in all aspects of this study. Using the two together in the DANS KOS Observatory will provide users with both clarity of scientific positioning and ontological relativity.
    Footnote
    Beitrag eines Special Issue: Research Information Systems and Science Classifications; including papers from "Trajectories for Research: Fathoming the Promise of the NARCIS Classification," 27-28 September 2018, The Hague, The Netherlands.
  16. Metadata and semantics research : 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings (2016) 0.00
    Abstract
    This book constitutes the refereed proceedings of the 10th Metadata and Semantics Research Conference, MTSR 2016, held in Göttingen, Germany, in November 2016. The 26 full papers and 6 short papers presented were carefully reviewed and selected from 67 submissions. The papers are organized in several sessions and tracks: Digital Libraries, Information Retrieval, Linked and Social Data, Metadata and Semantics for Open Repositories, Research Information Systems and Data Infrastructures, Metadata and Semantics for Agriculture, Food and Environment, Metadata and Semantics for Cultural Collections and Applications, European and National Projects.
    Series
    Communications in computer and information science; 672
  17. Krause, J.: Polyzentrische Informationsversorgung in einer dezentralisierten Informationswelt (1998) 0.00
    Source
    nfd Information - Wissenschaft und Praxis. 49(1998) H.6, S.345-351
  18. Li, K.W.; Yang, C.C.: Automatic crosslingual thesaurus generated from the Hong Kong SAR Police Department Web Corpus for Crime Analysis (2005) 0.00
    Abstract
    For the sake of national security, very large volumes of data and information are generated and gathered daily. Much of this data and information is written in different languages, stored in different locations, and may be seemingly unconnected. Crosslingual semantic interoperability is a major challenge to generate an overview of this disparate data and information so that it can be analyzed, shared, searched, and summarized. The recent terrorist attacks and the tragic events of September 11, 2001 have prompted increased attention on national security and criminal analysis. Many Asian countries and cities, such as Japan, Taiwan, and Singapore, have been advised that they may become the next targets of terrorist attacks. Semantic interoperability has been a focus in digital library research. Traditional information retrieval (IR) approaches normally require a document to share some common keywords with the query. Generating the associations for the related terms between the two term spaces of users and documents is an important issue. The problem can be viewed as the creation of a thesaurus. Apart from this, terrorists and criminals may communicate through letters, e-mails, and faxes in languages other than English. The translation ambiguity significantly exacerbates the retrieval problem. The problem is expanded to crosslingual semantic interoperability. In this paper, we focus on the English/Chinese crosslingual semantic interoperability problem. However, the developed techniques are not limited to English and Chinese but can be applied to many other languages. English and Chinese are popular languages in the Asian region. Much information about national security or crime is communicated in these languages. An efficient, automatically generated thesaurus between these languages is important to crosslingual information retrieval between English and Chinese.
To facilitate crosslingual information retrieval, a corpus-based approach uses the term co-occurrence statistics in parallel or comparable corpora to construct a statistical translation model to cross the language boundary. In this paper, a text-based approach to align English/Chinese Hong Kong Police press release documents from the Web is first presented. We also introduce an algorithmic approach to generate a robust knowledge base based on statistical correlation analysis of the semantics (knowledge) embedded in the bilingual press release corpus. The research output consisted of a thesaurus-like, semantic network knowledge base, which can aid in semantics-based crosslingual information management and retrieval.
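The corpus-based idea described above, scoring cross-language term associations from co-occurrence across aligned document pairs, can be sketched as follows. The toy aligned "corpus" and the choice of the Dice coefficient as the association measure are illustrative assumptions, not the authors' exact algorithm.

```python
# Illustrative sketch: cross-lingual term association from co-occurrence
# statistics over aligned English/Chinese document pairs. The corpus is
# a toy example; real systems would use a large aligned press-release corpus.
from collections import Counter
from itertools import product

# Each pair = (terms in an English document, terms in its aligned Chinese version)
aligned_pairs = [
    ({"police", "arrest"}, {"警察", "拘捕"}),
    ({"police", "report"}, {"警察", "报告"}),
    ({"arrest", "suspect"}, {"拘捕", "疑犯"}),
]

def association_scores(pairs):
    """Dice coefficient between cross-lingual term pairs, from co-occurrence counts."""
    en_freq, zh_freq, joint = Counter(), Counter(), Counter()
    for en_terms, zh_terms in pairs:
        en_freq.update(en_terms)
        zh_freq.update(zh_terms)
        joint.update(product(en_terms, zh_terms))  # every cross-lingual pairing
    return {
        (en, zh): 2 * n / (en_freq[en] + zh_freq[zh])
        for (en, zh), n in joint.items()
    }

scores = association_scores(aligned_pairs)
# "police" co-occurs with 警察 in both of its documents -> maximal association (1.0)
```

Thresholding such scores yields the thesaurus-like semantic network: each English term is linked to the Chinese terms whose association exceeds the cutoff, and vice versa.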
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.3, S.272-281
  19. Bittner, T.; Donnelly, M.; Winter, S.: Ontology and semantic interoperability (2006) 0.00
    Abstract
    One of the major problems facing systems for Computer Aided Design (CAD), Architecture Engineering and Construction (AEC) and Geographic Information Systems (GIS) applications today is the lack of interoperability among the various systems. When integrating software applications, substantial difficulties can arise in translating information from one application to the other. In this paper, we focus on semantic difficulties that arise in software integration. Applications may use different terminologies to describe the same domain. Even when applications use the same terminology, they often associate different semantics with the terms. This obstructs information exchange among applications. To circumvent this obstacle, we need some way of explicitly specifying the semantics for each terminology in an unambiguous fashion. Ontologies can provide such specification. It will be the task of this paper to explain what ontologies are and how they can be used to facilitate interoperability between software systems used in computer aided design, architecture engineering and construction, and geographic information processing.
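The mediating role the abstract assigns to ontologies can be illustrated with a deliberately tiny sketch: two applications use different terminologies, and translation between them goes through a shared, unambiguous concept table. All names below are invented, and a real system would use a formal ontology language (e.g. OWL) rather than Python dictionaries.

```python
# Illustrative sketch: terminology translation mediated by a shared set of
# explicitly specified concepts. The "ontology" here is reduced to a concept
# table purely for demonstration.
SHARED_ONTOLOGY = {"Wall", "Door", "Road"}  # unambiguous shared concepts

# Each application maps its own terms onto the shared concepts.
cad_to_shared = {"wall_elem": "Wall", "door_elem": "Door"}
gis_to_shared = {"barrier": "Wall", "street": "Road"}

def translate(term, source_map, target_map):
    """Translate a term between application vocabularies via the shared
    concept; returns None when no equivalent exists in the target."""
    concept = source_map.get(term)
    if concept not in SHARED_ONTOLOGY:
        return None
    inverse = {c: t for t, c in target_map.items()}
    return inverse.get(concept)

print(translate("wall_elem", cad_to_shared, gis_to_shared))  # barrier
```

Note that "street" has no CAD equivalent here: the shared semantics makes that gap explicit instead of silently mistranslating, which is exactly the kind of ambiguity the abstract argues ontologies remove.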
  20. Smith, D.A.: Exploratory and faceted browsing over heterogeneous and cross-domain data sources. (2011) 0.00
    Abstract
    Exploration of heterogeneous data sources increases the value of information by allowing users to answer questions across multiple sources: users can draw on information posted across the Web to answer questions and learn about new domains. We have conducted research that lowers the interrogation time of faceted data by combining related information from different sources. The work contributes methodologies for combining heterogeneous sources and for delivering that data to a user interface scalably, with enough performance to support rapid interrogation of the knowledge by the user. The work also contributes methods for combining linked data sources so that users can create faceted browsers targeted at the information facets of their needs. The work is grounded and proven in a number of experiments and test cases that study the contributions in domain research work.
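The combination step the abstract describes can be sketched in miniature: records from heterogeneous sources with different field names are normalised into one shape, from which facet counts are derived for browsing. The two "sources" and their field names below are invented assumptions, not the thesis's actual data model.

```python
# Illustrative sketch: merging records from two heterogeneous sources into
# a common shape, then computing facet counts for a faceted browser.
from collections import Counter

# Two invented sources using different field names for the same notions.
source_a = [{"title": "Mapping KOS", "subject": "classification", "year": 2015}]
source_b = [{"name": "SKOS primer", "topic": "thesauri", "published": 2015}]

def normalise(a_records, b_records):
    """Map both sources' field names onto one common record shape."""
    combined = [dict(title=r["title"], subject=r["subject"], year=r["year"])
                for r in a_records]
    combined += [dict(title=r["name"], subject=r["topic"], year=r["published"])
                 for r in b_records]
    return combined

def facet_counts(records, facet):
    """Count values of one facet across the combined record set."""
    return Counter(r[facet] for r in records)

records = normalise(source_a, source_b)
print(facet_counts(records, "year"))  # Counter({2015: 2})
```

In a faceted interface, each count becomes a clickable filter; precomputing counts over the combined set is one way to keep interrogation fast enough for the rapid exploration the abstract targets.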

Languages

  • e 130
  • d 26

Types

  • a 115
  • el 34
  • m 14
  • s 7
  • x 7
  • n 2
  • r 1