Search (51 results, page 1 of 3)

  • Filter: type_ss:"x"
  1. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.08
    0.082218945 = product of:
      0.16443789 = sum of:
        0.16443789 = product of:
          0.49331367 = sum of:
            0.49331367 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
              0.49331367 = score(doc=973,freq=2.0), product of:
                0.43887708 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.051766515 = queryNorm
                1.1240361 = fieldWeight in 973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.09375 = fieldNorm(doc=973)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Cf.: http://creativechoice.org/doc/HansJonas.pdf.
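    The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output. As a cross-check, the sketch below reconstructs result 1's score from the factors listed for weight(_text_:3a in 973); it assumes the usual ClassicSimilarity decomposition (queryWeight = idf · queryNorm, fieldWeight = sqrt(tf) · idf · fieldNorm, scaled by the coord factors), which is exactly what the tree itself reports.

      from math import sqrt, isclose

      # Factors copied from the explain tree of result 1 (doc 973, term "3a").
      idf = 8.478011          # idf(docFreq=24, maxDocs=44218)
      query_norm = 0.051766515
      freq = 2.0              # termFreq of "3a" in the field
      field_norm = 0.09375    # fieldNorm(doc=973)

      query_weight = idf * query_norm               # 0.43887708
      field_weight = sqrt(freq) * idf * field_norm  # 1.1240361
      term_score = query_weight * field_weight      # 0.49331367

      # coord(1/3): one of three query clauses matched; coord(1/2): one of two
      # top-level clauses contributed.
      final_score = term_score * (1.0 / 3.0) * 0.5

      assert isclose(final_score, 0.082218945, rel_tol=1e-5)
      print(final_score)  # ~0.08221895, matching the listed 0.082218945 up to rounding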
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.08
    0.07971269 = sum of:
      0.054812633 = product of:
        0.16443789 = sum of:
          0.16443789 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
            0.16443789 = score(doc=5820,freq=2.0), product of:
              0.43887708 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.051766515 = queryNorm
              0.3746787 = fieldWeight in 5820, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.03125 = fieldNorm(doc=5820)
        0.33333334 = coord(1/3)
      0.02490006 = product of:
        0.04980012 = sum of:
          0.04980012 = weight(_text_:language in 5820) [ClassicSimilarity], result of:
            0.04980012 = score(doc=5820,freq=4.0), product of:
              0.2030952 = queryWeight, product of:
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.051766515 = queryNorm
              0.2452058 = fieldWeight in 5820, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.03125 = fieldNorm(doc=5820)
        0.5 = coord(1/2)
    
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
    Imprint
    Pittsburgh, PA : Carnegie Mellon University, School of Computer Science, Language Technologies Institute
  3. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.05
    0.047451444 = product of:
      0.09490289 = sum of:
        0.09490289 = sum of:
          0.052821 = weight(_text_:language in 563) [ClassicSimilarity], result of:
            0.052821 = score(doc=563,freq=2.0), product of:
              0.2030952 = queryWeight, product of:
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.051766515 = queryNorm
              0.26008 = fieldWeight in 563, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.046875 = fieldNorm(doc=563)
          0.04208189 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
            0.04208189 = score(doc=563,freq=2.0), product of:
              0.18127751 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051766515 = queryNorm
              0.23214069 = fieldWeight in 563, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=563)
      0.5 = coord(1/2)
    
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with the LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language- and domain-independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human-written summaries in a large collection of web pages, and generate summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the alignment process from a training set and focuses on selecting high-quality multi-word terms from human-written summaries to generate suitable results for web-page summarization.
    Date
    10. 1.2013 19:22:47
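    The abstract above combines word association ("glue") measures with the LocalMaxs selection step. Purely as an illustration of that selection idea, and not code from the thesis, the sketch below scores n-grams with symmetric conditional probability and keeps a bigram when no trigram containing it has higher glue; the toy corpus, the choice of SCP as the glue, and the simplified neighbourhood test are all assumptions.

      from collections import Counter
      from statistics import mean

      def ngrams(tokens, n):
          return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

      def prob(gram, counts, total):
          return counts[len(gram)][gram] / total

      def scp_glue(gram, counts, total):
          """Symmetric conditional probability, fair-dispersion form:
          p(gram)^2 over the mean of p(prefix)*p(suffix) across split points."""
          p = prob(gram, counts, total)
          splits = [prob(gram[:i], counts, total) * prob(gram[i:], counts, total)
                    for i in range(1, len(gram))]
          return p * p / mean(splits)

      tokens = ("multi word term extraction is applied to web page summarization "
                "and multi word term extraction needs no training data").split()
      total = len(tokens)
      counts = {n: Counter(ngrams(tokens, n)) for n in (1, 2, 3)}

      # Simplified LocalMaxs-style test: keep a bigram if its glue is not
      # exceeded by any trigram that contains it.
      kept = []
      for bg in counts[2]:
          g = scp_glue(bg, counts, total)
          supers = [tg for tg in counts[3] if tg[:2] == bg or tg[1:] == bg]
          if all(g >= scp_glue(tg, counts, total) for tg in supers):
              kept.append((bg, g))

      for (w1, w2), g in sorted(kept, key=lambda kv: -kv[1])[:5]:
          print(f"{w1} {w2}\t{g:.4f}")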
  4. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.03
    0.034257896 = product of:
      0.06851579 = sum of:
        0.06851579 = product of:
          0.20554736 = sum of:
            0.20554736 = weight(_text_:3a in 4997) [ClassicSimilarity], result of:
              0.20554736 = score(doc=4997,freq=2.0), product of:
                0.43887708 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.051766515 = queryNorm
                0.46834838 = fieldWeight in 4997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4997)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    PhD dissertation, International Doctorate School in Information and Communication Technology. Cf.: https://core.ac.uk/download/pdf/150083013.pdf.
  5. Shala, E.: Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.03
    0.034257896 = product of:
      0.06851579 = sum of:
        0.06851579 = product of:
          0.20554736 = sum of:
            0.20554736 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.20554736 = score(doc=4388,freq=2.0), product of:
                0.43887708 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.051766515 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Footnote
    Cf.: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls.
  6. Piros, A.: Az ETO-jelzetek automatikus interpretálásának és elemzésének kérdései (2018) 0.03
    0.034257896 = product of:
      0.06851579 = sum of:
        0.06851579 = product of:
          0.20554736 = sum of:
            0.20554736 = weight(_text_:3a in 855) [ClassicSimilarity], result of:
              0.20554736 = score(doc=855,freq=2.0), product of:
                0.43887708 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.051766515 = queryNorm
                0.46834838 = fieldWeight in 855, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=855)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    See also: New automatic interpreter for complex UDC numbers. At: https://udcc.org/files/AttilaPiros_EC_36-37_2014-2015.pdf.
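    Interpreting a complex UDC number means splitting the composite notation into its component numbers and the symbols that connect them. As a rough illustration only (real UDC syntax with auxiliaries in (), [], ="..." is far richer, and this is not Piros's algorithm), a toy splitter over the connectors +, /, : and :: might look like this; the sample notation is an assumption.

      import re

      # Separates top-level connector symbols from the UDC numbers they join.
      CONNECTORS = re.compile(r"(::|[+/:])")

      def split_udc(expression: str):
          parts = [p for p in CONNECTORS.split(expression.replace(" ", "")) if p]
          numbers = parts[0::2]   # the component notations
          links = parts[1::2]     # the connectors between them
          return numbers, links

      numbers, links = split_udc("025.45:004.8")
      print(numbers)  # ['025.45', '004.8']
      print(links)    # [':']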
  7. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.03
    0.034257896 = product of:
      0.06851579 = sum of:
        0.06851579 = product of:
          0.20554736 = sum of:
            0.20554736 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
              0.20554736 = score(doc=1000,freq=2.0), product of:
                0.43887708 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.051766515 = queryNorm
                0.46834838 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
  8. Stünkel, M.: Neuere Methoden der inhaltlichen Erschließung schöner Literatur in öffentlichen Bibliotheken (1986) 0.03
    0.028054593 = product of:
      0.056109186 = sum of:
        0.056109186 = product of:
          0.11221837 = sum of:
            0.11221837 = weight(_text_:22 in 5815) [ClassicSimilarity], result of:
              0.11221837 = score(doc=5815,freq=2.0), product of:
                0.18127751 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051766515 = queryNorm
                0.61904186 = fieldWeight in 5815, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=5815)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    4. 8.2006 21:35:22
  9. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.03
    0.027406316 = product of:
      0.054812633 = sum of:
        0.054812633 = product of:
          0.16443789 = sum of:
            0.16443789 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.16443789 = score(doc=701,freq=2.0), product of:
                0.43887708 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.051766515 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  10. Menges, T.: Möglichkeiten und Grenzen der Übertragbarkeit eines Buches auf Hypertext am Beispiel einer französischen Grundgrammatik (Klein; Kleineidam) (1997) 0.02
    0.024547769 = product of:
      0.049095538 = sum of:
        0.049095538 = product of:
          0.098191075 = sum of:
            0.098191075 = weight(_text_:22 in 1496) [ClassicSimilarity], result of:
              0.098191075 = score(doc=1496,freq=2.0), product of:
                0.18127751 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051766515 = queryNorm
                0.5416616 = fieldWeight in 1496, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1496)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.1998 18:23:25
  11. Schneider, A.: Die Verzeichnung und sachliche Erschließung der Belletristik in Kaysers Bücherlexikon und im Schlagwortkatalog Georg/Ost (1980) 0.02
    0.024547769 = product of:
      0.049095538 = sum of:
        0.049095538 = product of:
          0.098191075 = sum of:
            0.098191075 = weight(_text_:22 in 5309) [ClassicSimilarity], result of:
              0.098191075 = score(doc=5309,freq=2.0), product of:
                0.18127751 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051766515 = queryNorm
                0.5416616 = fieldWeight in 5309, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5309)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    5. 8.2006 13:07:22
  12. Sperling, R.: Anlage von Literaturreferenzen für Onlineressourcen auf einer virtuellen Lernplattform (2004) 0.02
    0.024547769 = product of:
      0.049095538 = sum of:
        0.049095538 = product of:
          0.098191075 = sum of:
            0.098191075 = weight(_text_:22 in 4635) [ClassicSimilarity], result of:
              0.098191075 = score(doc=4635,freq=2.0), product of:
                0.18127751 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051766515 = queryNorm
                0.5416616 = fieldWeight in 4635, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4635)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    26.11.2005 18:39:22
  13. Stanz, G.: Medienarchive: Analyse einer unterschätzten Ressource : Archivierung, Dokumentation, und Informationsvermittlung in Medien bei besonderer Berücksichtigung von Pressearchiven (1994) 0.02
    0.021040944 = product of:
      0.04208189 = sum of:
        0.04208189 = product of:
          0.08416378 = sum of:
            0.08416378 = weight(_text_:22 in 9) [ClassicSimilarity], result of:
              0.08416378 = score(doc=9,freq=2.0), product of:
                0.18127751 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051766515 = queryNorm
                0.46428138 = fieldWeight in 9, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=9)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 2.1997 19:50:29
  14. Hartwieg, U.: ¬Die nationalbibliographische Situation im 18. Jahrhundert : Vorüberlegungen zur Verzeichnung der deutschen Drucke in einem VD18 (1999) 0.02
    0.021040944 = product of:
      0.04208189 = sum of:
        0.04208189 = product of:
          0.08416378 = sum of:
            0.08416378 = weight(_text_:22 in 3813) [ClassicSimilarity], result of:
              0.08416378 = score(doc=3813,freq=2.0), product of:
                0.18127751 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051766515 = queryNorm
                0.46428138 = fieldWeight in 3813, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3813)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    18. 6.1999 9:22:36
  15. Milanesi, C.: Möglichkeiten der Kooperation im Rahmen von Subject Gateways : das Euler-Projekt im Vergleich mit weiteren europäischen Projekten (2001) 0.02
    0.021040944 = product of:
      0.04208189 = sum of:
        0.04208189 = product of:
          0.08416378 = sum of:
            0.08416378 = weight(_text_:22 in 4865) [ClassicSimilarity], result of:
              0.08416378 = score(doc=4865,freq=2.0), product of:
                0.18127751 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051766515 = queryNorm
                0.46428138 = fieldWeight in 4865, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4865)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2002 19:41:59
  16. Gordon, T.J.; Helmer-Hirschberg, O.: Report on a long-range forecasting study (1964) 0.02
    0.019837592 = product of:
      0.039675184 = sum of:
        0.039675184 = product of:
          0.07935037 = sum of:
            0.07935037 = weight(_text_:22 in 4204) [ClassicSimilarity], result of:
              0.07935037 = score(doc=4204,freq=4.0), product of:
                0.18127751 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051766515 = queryNorm
                0.4377287 = fieldWeight in 4204, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4204)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2018 13:24:08
    22. 6.2018 13:54:52
  17. Francu, V.: Multilingual access to information using an intermediate language (2003) 0.02
    0.019685227 = product of:
      0.039370455 = sum of:
        0.039370455 = product of:
          0.07874091 = sum of:
            0.07874091 = weight(_text_:language in 1742) [ClassicSimilarity], result of:
              0.07874091 = score(doc=1742,freq=10.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.38770443 = fieldWeight in 1742, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1742)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    While theoretically so widely available, information can be kept from more general use by linguistic barriers. The linguistic aspects of information languages, and particularly the chances of enhancing access to information by means of multilingual access facilities, form the substance of this thesis. The main problem of this research is thus to demonstrate that information retrieval can be improved by searching with multilingual thesaurus terms based on an intermediate or switching language. Universal classification systems in general can play the role of switching languages, for reasons dealt with in the following pages. The Universal Decimal Classification (UDC) in particular is the classification system used here as an example of a switching language. The question may arise: why a universal classification system and not another thesaurus? Because the UDC, like most classification systems, uses symbols. It is therefore language-independent, and the problems of compatibility between such a thesaurus and other thesauri in different languages are avoided. Another question may arise: why not, then, assign running numbers to the descriptors in a thesaurus and make a switching language out of the resulting enumerative system? Because of other characteristics of the UDC: hierarchical structure and terminological richness, consistency and control. One big problem to answer is whether a thesaurus can be built on the basis of a classification system in any and all of its parts, and to what extent this question can be answered affirmatively. This depends largely on the attributes of the universal classification system that can be favourably used for this purpose. Examples of different situations will be given and discussed, beginning with those classes of UDC which are best fitted for building a thesaurus structure out of them (classes which are both hierarchical and faceted)...
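    The abstract above rests on one idea: a language-independent UDC notation can act as the pivot ("switching language") between descriptors in different languages. A minimal sketch of that mapping, where the notations, descriptors and lookup routine are invented for illustration and not taken from the thesis:

      # Each UDC notation is the language-independent pivot; descriptors in any
      # number of languages hang off it.
      switching = {
          "025.4": {"en": "classification systems", "de": "Klassifikationssysteme",
                    "ro": "sisteme de clasificare"},
          "81'374": {"en": "lexicography", "de": "Lexikographie", "ro": "lexicografie"},
      }

      def translate(term, source, target):
          """Return the target-language descriptors of every notation whose
          source-language descriptor matches the query term."""
          return [langs[target] for langs in switching.values()
                  if langs.get(source, "").lower() == term.lower() and target in langs]

      print(translate("Klassifikationssysteme", "de", "en"))
      # ['classification systems']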
  18. Temath, C.: Prototypische Implementierung der "Topic Map Query Language"-Abfragesprache für die Groupware-basierte Topic Map Engine (2005) 0.02
    0.01906014 = product of:
      0.03812028 = sum of:
        0.03812028 = product of:
          0.07624056 = sum of:
            0.07624056 = weight(_text_:language in 200) [ClassicSimilarity], result of:
              0.07624056 = score(doc=200,freq=6.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.3753932 = fieldWeight in 200, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=200)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The following documentation covers the results of a seminar paper on the prototypical implementation of the "Topic Map Query Language" for the groupware-based Topic Map Engine, written in the seminar Wirtschaftsinformatik II at the Groupware Competence Center. Within Stefan Smolnik's dissertation project "K-Discovery" at the Groupware Competence Center, a prototype of a groupware-based Topic Map engine was developed. This environment provides various tools for modelling, creating and visualising topic maps in a groupware-based setting, ranging from a graphical modelling tool for building topic maps to search tools that support graphical or text-based searching for information. In addition, an export interface makes it possible to export the data of a generated topic map into a standardised XML format, XML Topic Maps (XTM). This constitutes a first, rudimentary interface for querying topic map information in the groupware-based Topic Map Engine (GTME). As part of international standardisation efforts, a query standard for topic maps is currently being developed, the "Topic Map Query Language (TMQL)". The goal of this work is therefore to give an overview of the current state of the TMQL standardisation process and, based on the results produced so far, to build a prototypical implementation for the groupware-based Topic Map Engine. The aim is thus a standardised interface for querying topic map data, opening the groupware-based Topic Map Engine to a new range of applications.
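    Until a finished TMQL standard exists, the XTM export mentioned above is an ordinary XML document, so even a generic XML API can answer simple "list all topics and their names" requests. A hedged sketch against XTM 1.0 element names; the two-topic sample map is invented for illustration and is not output of the GTME.

      import xml.etree.ElementTree as ET

      XTM = """<topicMap xmlns="http://www.topicmaps.org/xtm/1.0/">
        <topic id="tmql">
          <baseName><baseNameString>Topic Map Query Language</baseNameString></baseName>
        </topic>
        <topic id="gtme">
          <baseName><baseNameString>Groupware-based Topic Map Engine</baseNameString></baseName>
        </topic>
      </topicMap>"""

      NS = {"xtm": "http://www.topicmaps.org/xtm/1.0/"}
      root = ET.fromstring(XTM)

      # "Query": list every topic id together with its base names.
      for topic in root.findall("xtm:topic", NS):
          names = [n.text for n in topic.findall("xtm:baseName/xtm:baseNameString", NS)]
          print(topic.get("id"), names)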
  19. Scherer Auberson, K.: Counteracting concept drift in natural language classifiers : proposal for an automated method (2018) 0.02
    0.018675046 = product of:
      0.037350092 = sum of:
        0.037350092 = product of:
          0.074700184 = sum of:
            0.074700184 = weight(_text_:language in 2849) [ClassicSimilarity], result of:
              0.074700184 = score(doc=2849,freq=4.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.3678087 = fieldWeight in 2849, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2849)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Natural language classifiers increasingly help companies cope with the flood of text data. But these classifiers, once trained, lose their usefulness over time: they remain static while the underlying domain of the text data changes, and their accuracy degrades due to a phenomenon known as concept drift. The question is whether concept drift can be reliably detected from the output of a classifier and, if so, whether it can be counteracted by retraining the classifier. A proof-of-concept system implementation is presented in which the classifier's confidence measure is used to detect concept drift. The classifier is then retrained iteratively by selecting samples with low confidence, correcting them, and using them in the training set of the next iteration. The classifier's performance is measured over time and the behaviour of the system is observed. Based on this, recommendations are given that may prove useful when implementing such systems.
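    The loop described above (flag low-confidence predictions, correct them, retrain) can be outlined in a few lines. The sketch below uses scikit-learn's logistic regression on synthetic drifting data as a stand-in for the natural-language classifier; the confidence threshold and the "oracle" that supplies corrected labels are assumptions, not details from the thesis.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)

      def batch(shift, n=300):
          """Two-class synthetic data; `shift` moves class 1 toward class 0 to simulate drift."""
          X0 = rng.normal(0.0, 1.0, (n, 2))
          X1 = rng.normal(2.0 - shift, 1.0, (n, 2))
          return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

      X_train, y_train = batch(shift=0.0)
      clf = LogisticRegression().fit(X_train, y_train)

      THRESHOLD = 0.75  # confidence below this counts as a possible drift signal

      for step, shift in enumerate([0.5, 1.0, 1.5], start=1):
          X_new, y_true = batch(shift)          # y_true plays the correcting oracle
          conf = clf.predict_proba(X_new).max(axis=1)
          uncertain = conf < THRESHOLD
          acc = (clf.predict(X_new) == y_true).mean()
          print(f"step {step}: low-confidence share {uncertain.mean():.2f}, accuracy {acc:.2f}")

          # Retrain on the corrected low-confidence samples plus the old training set.
          X_train = np.vstack([X_train, X_new[uncertain]])
          y_train = np.concatenate([y_train, y_true[uncertain]])
          clf = LogisticRegression().fit(X_train, y_train)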
  20. Haller, S.H.M.: Mappingverfahren zur Wissensorganisation (2002) 0.02
    0.01753412 = product of:
      0.03506824 = sum of:
        0.03506824 = product of:
          0.07013648 = sum of:
            0.07013648 = weight(_text_:22 in 3406) [ClassicSimilarity], result of:
              0.07013648 = score(doc=3406,freq=2.0), product of:
                0.18127751 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051766515 = queryNorm
                0.38690117 = fieldWeight in 3406, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3406)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    30. 5.2010 16:22:35

Languages

  • d 33
  • e 16
  • hu 1