Search (46 results, page 1 of 3)

  • type_ss:"x"
  1. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.09
    0.08653661 = sum of:
      0.054482006 = product of:
        0.16344601 = sum of:
          0.16344601 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
            0.16344601 = score(doc=5820,freq=2.0), product of:
              0.4362298 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.05145426 = queryNorm
              0.3746787 = fieldWeight in 5820, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.03125 = fieldNorm(doc=5820)
        0.33333334 = coord(1/3)
      0.032054603 = product of:
        0.064109206 = sum of:
          0.064109206 = weight(_text_:learning in 5820) [ClassicSimilarity], result of:
            0.064109206 = score(doc=5820,freq=4.0), product of:
              0.22973695 = queryWeight, product of:
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.05145426 = queryNorm
              0.27905482 = fieldWeight in 5820, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.464877 = idf(docFreq=1382, maxDocs=44218)
                0.03125 = fieldNorm(doc=5820)
        0.5 = coord(1/2)
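    The breakdown above is Lucene ClassicSimilarity "explain" output: each matching term contributes queryWeight (idf × queryNorm) multiplied by fieldWeight (√termFreq × idf × fieldNorm), scaled by the coord factor for the fraction of query clauses that matched. As a minimal sketch of our own, using only the numbers printed above, the 0.08653661 total for this record can be re-derived in Python:

      # Re-derive the ClassicSimilarity score for result 1 from the explain values above.
      def term_score(term_freq, idf, query_norm, field_norm):
          query_weight = idf * query_norm                        # e.g. 8.478011 * 0.05145426
          field_weight = (term_freq ** 0.5) * idf * field_norm   # tf = sqrt(termFreq)
          return query_weight * field_weight

      s_3a = term_score(2.0, 8.478011, 0.05145426, 0.03125) * (1 / 3)        # coord(1/3)
      s_learning = term_score(4.0, 4.464877, 0.05145426, 0.03125) * (1 / 2)  # coord(1/2)
      print(s_3a + s_learning)  # ~0.0865366, matching the document score shown above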
    
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations, with their uncertainties taken into account. Finally, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts, and to rank documents using their structure representations. This dissertation overcomes the limitation of word-based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards the new generation of intelligent, semantic, and structured information retrieval.
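    As a rough illustration of the bag-of-entities idea summarized above (documents represented by their automatic entity annotations and ranked in the entity space), the following Python sketch is a toy example of ours; the entity identifiers, documents, and scoring rule are illustrative assumptions, not the dissertation's actual model:

      from collections import Counter

      def bag_of_entities_score(query_entities, doc_entity_annotations):
          # Toy ranking signal: frequency-weighted overlap between the query's
          # entities and the entities annotated in a document.
          doc_bag = Counter(doc_entity_annotations)
          return sum(doc_bag[e] for e in query_entities)

      # Hypothetical entity IDs and documents, purely for illustration.
      query_entities = {"E:information_retrieval", "E:knowledge_base"}
      docs = {
          "d1": ["E:information_retrieval", "E:knowledge_base", "E:knowledge_base"],
          "d2": ["E:machine_translation", "E:information_retrieval"],
      }
      ranking = sorted(docs, key=lambda d: bag_of_entities_score(query_entities, docs[d]),
                       reverse=True)
      print(ranking)  # ['d1', 'd2'] under this toy scoring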
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  2. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.08
    0.081723005 = product of:
      0.16344601 = sum of:
        0.16344601 = product of:
          0.49033803 = sum of:
            0.49033803 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
              0.49033803 = score(doc=973,freq=2.0), product of:
                0.4362298 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05145426 = queryNorm
                1.1240361 = fieldWeight in 973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.09375 = fieldNorm(doc=973)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Cf.: http://creativechoice.org/doc/HansJonas.pdf.
  3. Kuhlwein, K.: E-Learning - ein Aufgabenfeld für Bibliothekare? : Eine Untersuchung ausgewählter Online-Schulungskonzepte an deutschen Bibliotheken (2004) 0.04
    0.03925871 = product of:
      0.07851742 = sum of:
        0.07851742 = product of:
          0.15703484 = sum of:
            0.15703484 = weight(_text_:learning in 3580) [ClassicSimilarity], result of:
              0.15703484 = score(doc=3580,freq=6.0), product of:
                0.22973695 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.05145426 = queryNorm
                0.68354195 = fieldWeight in 3580, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3580)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    After an introduction to the topics of e-learning and information literacy, the online tutorials offered to users of 16 German academic libraries are examined. The description of the current state and a comparative assessment are followed by an evaluation of whether e-learning is suitable for library instruction purposes.
  4. Lipokatic, R.: Vergleichende Usability-Evaluation der E-Learning Plattformen LUVIT und WebCT (2004) 0.04
    0.03925871 = product of:
      0.07851742 = sum of:
        0.07851742 = product of:
          0.15703484 = sum of:
            0.15703484 = weight(_text_:learning in 3702) [ClassicSimilarity], result of:
              0.15703484 = score(doc=3702,freq=6.0), product of:
                0.22973695 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.05145426 = queryNorm
                0.68354195 = fieldWeight in 3702, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3702)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    E-learning platforms for computer-based learning, with their communication features available over the Internet, must comply with usability guidelines in order to ensure a high level of user-friendliness. Two selected e-learning platforms are compared, with the focus on explaining the various evaluation methods and the heuristics that were developed.
  5. Weidler, D.: Usability-Inspektionen und Usability-Tests : Komplementäre Methoden für die Evaluation eines E-Learning-Moduls (2004) 0.03
    0.034351375 = product of:
      0.06870275 = sum of:
        0.06870275 = product of:
          0.1374055 = sum of:
            0.1374055 = weight(_text_:learning in 3706) [ClassicSimilarity], result of:
              0.1374055 = score(doc=3706,freq=6.0), product of:
                0.22973695 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.05145426 = queryNorm
                0.59809923 = fieldWeight in 3706, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3706)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Users of e-learning products face a double burden: absorbing the learning material and learning how to operate the software. Good usability can counteract the risk of exposing users to cognitive overload. The usability of parts of the e-learning module »Automatische Inhaltserschließung« (automatic subject indexing) is examined using analytical inspection methods and empirical usability tests. Since the design process was not iterative, the results of the inspections serve to prepare the tests. The test results are used to verify the findings of the expert-oriented methods. It is analysed whether the inspection methods uncover problems that participants in the usability tests also perceived as problems. To put the proposed solutions into practice, usability reports are written and a storyboard is designed.
  6. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.03
    0.034051254 = product of:
      0.06810251 = sum of:
        0.06810251 = product of:
          0.20430753 = sum of:
            0.20430753 = weight(_text_:3a in 4997) [ClassicSimilarity], result of:
              0.20430753 = score(doc=4997,freq=2.0), product of:
                0.4362298 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05145426 = queryNorm
                0.46834838 = fieldWeight in 4997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4997)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    PhD dissertation at the International Doctorate School in Information and Communication Technology. Cf.: https://core.ac.uk/download/pdf/150083013.pdf.
  7. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.03
    0.034051254 = product of:
      0.06810251 = sum of:
        0.06810251 = product of:
          0.20430753 = sum of:
            0.20430753 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.20430753 = score(doc=4388,freq=2.0), product of:
                0.4362298 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05145426 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Footnote
    Cf.: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls.
  8. Piros, A.: Az ETO-jelzetek automatikus interpretálásának és elemzésének kérdései (2018) 0.03
    0.034051254 = product of:
      0.06810251 = sum of:
        0.06810251 = product of:
          0.20430753 = sum of:
            0.20430753 = weight(_text_:3a in 855) [ClassicSimilarity], result of:
              0.20430753 = score(doc=855,freq=2.0), product of:
                0.4362298 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05145426 = queryNorm
                0.46834838 = fieldWeight in 855, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=855)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    See also: New automatic interpreter for complex UDC numbers. At: <https://udcc.org/files/AttilaPiros_EC_36-37_2014-2015.pdf>
  9. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.03
    0.034051254 = product of:
      0.06810251 = sum of:
        0.06810251 = product of:
          0.20430753 = sum of:
            0.20430753 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
              0.20430753 = score(doc=1000,freq=2.0), product of:
                0.4362298 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05145426 = queryNorm
                0.46834838 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the accompanying presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
  10. Stünkel, M.: Neuere Methoden der inhaltlichen Erschließung schöner Literatur in öffentlichen Bibliotheken (1986) 0.03
    0.027885368 = product of:
      0.055770736 = sum of:
        0.055770736 = product of:
          0.11154147 = sum of:
            0.11154147 = weight(_text_:22 in 5815) [ClassicSimilarity], result of:
              0.11154147 = score(doc=5815,freq=2.0), product of:
                0.18018405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05145426 = queryNorm
                0.61904186 = fieldWeight in 5815, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=5815)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    4. 8.2006 21:35:22
  11. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.03
    0.027241003 = product of:
      0.054482006 = sum of:
        0.054482006 = product of:
          0.16344601 = sum of:
            0.16344601 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.16344601 = score(doc=701,freq=2.0), product of:
                0.4362298 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05145426 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  12. Nagy T., I.: Detecting multiword expressions and named entities in natural language texts (2014) 0.03
    0.026236294 = product of:
      0.052472588 = sum of:
        0.052472588 = product of:
          0.104945175 = sum of:
            0.104945175 = weight(_text_:learning in 1536) [ClassicSimilarity], result of:
              0.104945175 = score(doc=1536,freq=14.0), product of:
                0.22973695 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.05145426 = queryNorm
                0.45680583 = fieldWeight in 1536, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1536)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Multiword expressions (MWEs) are lexical items that can be decomposed into single words and display lexical, syntactic, semantic, pragmatic and/or statistical idiosyncrasy (Sag et al., 2002; Kim, 2008; Calzolari et al., 2002). The proper treatment of multiword expressions such as rock 'n' roll and make a decision is essential for many natural language processing (NLP) applications like information extraction and retrieval, terminology extraction and machine translation, and it is important to identify multiword expressions in context. For example, in machine translation we must know that MWEs form one semantic unit, hence their parts should not be translated separately. For this, multiword expressions should be identified first in the text to be translated. The chief aim of this thesis is to develop machine learning-based approaches for the automatic detection of different types of multiword expressions in English and Hungarian natural language texts. In our investigations, we pay attention to the characteristics of different types of multiword expressions such as nominal compounds, multiword named entities and light verb constructions, and we apply novel methods to identify MWEs in raw texts. In the thesis it will be demonstrated that nominal compounds and multiword named entities may require a similar approach for their automatic detection as they behave in the same way from a linguistic point of view. Furthermore, it will be shown that the automatic detection of light verb constructions can be carried out using two effective machine learning-based approaches.
    In this thesis, we focused on the automatic detection of multiword expressions in natural language texts. On the basis of the main contributions, we can argue that:
    • Supervised machine learning methods can be successfully applied for the automatic detection of different types of multiword expressions in natural language texts.
    • Machine learning-based multiword expression detection can be successfully carried out for English as well as for Hungarian.
    • Our supervised machine learning-based model was successfully applied to the automatic detection of nominal compounds from English raw texts.
    • We developed a Wikipedia-based dictionary labeling method to automatically detect English nominal compounds.
    • Prior knowledge of nominal compounds can enhance Named Entity Recognition, while previously identified named entities can assist the nominal compound identification process.
    • The machine learning-based method can also provide acceptable results when trained on an automatically generated silver standard corpus.
    • As named entities form one semantic unit, may consist of more than one word, and function as a noun, we can treat them in a similar way to nominal compounds.
    • Our sequence labelling-based tool can be successfully applied for identifying verbal light verb constructions in two typologically different languages, namely English and Hungarian.
    • Domain adaptation techniques may help diminish the distance between domains in the automatic detection of light verb constructions.
    • Our syntax-based method can be successfully applied for the full-coverage identification of light verb constructions. As a first step, a data-driven candidate extraction method can be utilized; afterwards, a machine learning approach that makes use of an extended and rich feature set selects LVCs among the extracted candidates.
    • When a precise syntactic parser is available for the domain at hand, full-coverage identification performs better; in other cases, using the sequence labeling method is recommended.
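    The sequence-labelling view mentioned above can be made concrete with a small sketch; the sentence, the BIO tags, and the feature function below are illustrative assumptions of ours, not the thesis's actual feature set:

      # Toy token-level (BIO) labelling setup for light verb constructions (LVCs).
      sentence = ["She", "made", "a", "decision", "yesterday"]
      gold_bio = ["O", "B-LVC", "I-LVC", "I-LVC", "O"]  # "made a decision" as one unit

      def token_features(tokens, i):
          # Deliberately tiny feature set; a real system would add lemma, POS and syntax.
          return {
              "lower": tokens[i].lower(),
              "is_light_verb": tokens[i].lower() in {"make", "made", "take", "give", "have"},
              "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
          }

      for i, tag in enumerate(gold_bio):
          print(tag, token_features(sentence, i))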
  13. Menges, T.: Möglichkeiten und Grenzen der Übertragbarkeit eines Buches auf Hypertext am Beispiel einer französischen Grundgrammatik (Klein; Kleineidam) (1997) 0.02
    0.024399696 = product of:
      0.04879939 = sum of:
        0.04879939 = product of:
          0.09759878 = sum of:
            0.09759878 = weight(_text_:22 in 1496) [ClassicSimilarity], result of:
              0.09759878 = score(doc=1496,freq=2.0), product of:
                0.18018405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05145426 = queryNorm
                0.5416616 = fieldWeight in 1496, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1496)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.1998 18:23:25
  14. Schneider, A.: ¬Die Verzeichnung und sachliche Erschließung der Belletristik in Kaysers Bücherlexikon und im Schlagwortkatalog Georg/Ost (1980) 0.02
    0.024399696 = product of:
      0.04879939 = sum of:
        0.04879939 = product of:
          0.09759878 = sum of:
            0.09759878 = weight(_text_:22 in 5309) [ClassicSimilarity], result of:
              0.09759878 = score(doc=5309,freq=2.0), product of:
                0.18018405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05145426 = queryNorm
                0.5416616 = fieldWeight in 5309, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5309)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    5. 8.2006 13:07:22
  15. Sperling, R.: Anlage von Literaturreferenzen für Onlineressourcen auf einer virtuellen Lernplattform (2004) 0.02
    0.024399696 = product of:
      0.04879939 = sum of:
        0.04879939 = product of:
          0.09759878 = sum of:
            0.09759878 = weight(_text_:22 in 4635) [ClassicSimilarity], result of:
              0.09759878 = score(doc=4635,freq=2.0), product of:
                0.18018405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05145426 = queryNorm
                0.5416616 = fieldWeight in 4635, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4635)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    26.11.2005 18:39:22
  16. Mao, M.: Ontology mapping : towards semantic interoperability in distributed and heterogeneous environments (2008) 0.02
    0.022666026 = product of:
      0.04533205 = sum of:
        0.04533205 = product of:
          0.0906641 = sum of:
            0.0906641 = weight(_text_:learning in 4659) [ClassicSimilarity], result of:
              0.0906641 = score(doc=4659,freq=8.0), product of:
                0.22973695 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.05145426 = queryNorm
                0.3946431 = fieldWeight in 4659, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4659)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This dissertation studies ontology mapping: the problem of finding semantic correspondences between similar elements of different ontologies. In the dissertation, elements denote classes or properties of ontologies. The goal of this research is to use ontology mapping to make heterogeneous information more accessible. The World Wide Web (WWW) now is widely used as a universal medium for information exchange. Semantic interoperability among different information systems in the WWW is limited due to information heterogeneity, and the non semantic nature of HTML and URLs. Ontologies have been suggested as a way to solve the problem of information heterogeneity by providing formal, explicit definitions of data and reasoning ability over related concepts. Given that no universal ontology exists for the WWW, work has focused on finding semantic correspondences between similar elements of different ontologies, i.e., ontology mapping. Ontology mapping can be done either by hand or using automated tools. Manual mapping becomes impractical as the size and complexity of ontologies increases. Full or semi-automated mapping approaches have been examined by several research studies. Previous full or semiautomated mapping approaches include analyzing linguistic information of elements in ontologies, treating ontologies as structural graphs, applying heuristic rules and machine learning techniques, and using probabilistic and reasoning methods etc. In this paper, two generic ontology mapping approaches are proposed. One is the PRIOR+ approach, which utilizes both information retrieval and artificial intelligence techniques in the context of ontology mapping. The other is the non-instance learning based approach, which experimentally explores machine learning algorithms to solve ontology mapping problem without requesting any instance. The results of the PRIOR+ on different tests at OAEI ontology matching campaign 2007 are encouraging. The non-instance learning based approach has shown potential for solving ontology mapping problem on OAEI benchmark tests.
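    As a hedged illustration of the linguistic-similarity strand of ontology mapping described above, the sketch below matches elements of two toy ontologies by name similarity alone; the element names and the cut-off are our assumptions, and PRIOR+ itself combines several further signals:

      from difflib import SequenceMatcher

      # Two toy ontologies; element names are illustrative only.
      onto_a = ["Person", "JournalArticle", "hasAuthor"]
      onto_b = ["person", "Article", "authorOf"]

      def name_similarity(a, b):
          # Character-level similarity as a stand-in for richer linguistic matching.
          return SequenceMatcher(None, a.lower(), b.lower()).ratio()

      for ea in onto_a:
          best = max(onto_b, key=lambda eb: name_similarity(ea, eb))
          if name_similarity(ea, best) > 0.5:  # arbitrary threshold for the toy example
              print(f"{ea} <-> {best} ({name_similarity(ea, best):.2f})")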
  17. Stanz, G.: Medienarchive: Analyse einer unterschätzten Ressource : Archivierung, Dokumentation, und Informationsvermittlung in Medien bei besonderer Berücksichtigung von Pressearchiven (1994) 0.02
    0.020914026 = product of:
      0.04182805 = sum of:
        0.04182805 = product of:
          0.0836561 = sum of:
            0.0836561 = weight(_text_:22 in 9) [ClassicSimilarity], result of:
              0.0836561 = score(doc=9,freq=2.0), product of:
                0.18018405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05145426 = queryNorm
                0.46428138 = fieldWeight in 9, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=9)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 2.1997 19:50:29
  18. Hartwieg, U.: ¬Die nationalbibliographische Situation im 18. Jahrhundert : Vorüberlegungen zur Verzeichnung der deutschen Drucke in einem VD18 (1999) 0.02
    0.020914026 = product of:
      0.04182805 = sum of:
        0.04182805 = product of:
          0.0836561 = sum of:
            0.0836561 = weight(_text_:22 in 3813) [ClassicSimilarity], result of:
              0.0836561 = score(doc=3813,freq=2.0), product of:
                0.18018405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05145426 = queryNorm
                0.46428138 = fieldWeight in 3813, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3813)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    18. 6.1999 9:22:36
  19. Milanesi, C.: Möglichkeiten der Kooperation im Rahmen von Subject Gateways : das Euler-Projekt im Vergleich mit weiteren europäischen Projekten (2001) 0.02
    0.020914026 = product of:
      0.04182805 = sum of:
        0.04182805 = product of:
          0.0836561 = sum of:
            0.0836561 = weight(_text_:22 in 4865) [ClassicSimilarity], result of:
              0.0836561 = score(doc=4865,freq=2.0), product of:
                0.18018405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05145426 = queryNorm
                0.46428138 = fieldWeight in 4865, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4865)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2002 19:41:59
  20. Gordon, T.J.; Helmer-Hirschberg, O.: Report on a long-range forecasting study (1964) 0.02
    0.019717934 = product of:
      0.039435867 = sum of:
        0.039435867 = product of:
          0.078871734 = sum of:
            0.078871734 = weight(_text_:22 in 4204) [ClassicSimilarity], result of:
              0.078871734 = score(doc=4204,freq=4.0), product of:
                0.18018405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05145426 = queryNorm
                0.4377287 = fieldWeight in 4204, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4204)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2018 13:24:08
    22. 6.2018 13:54:52

Languages

  • d 32
  • e 13
  • hu 1