Search (853 results, page 1 of 43)

  • year_i:[2000 TO 2010}
  • language_ss:"e"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.43
    0.43478474 = product of:
      0.6086986 = sum of:
        0.059350993 = product of:
          0.17805298 = sum of:
            0.17805298 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.17805298 = score(doc=562,freq=2.0), product of:
                0.31681007 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.037368443 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.17805298 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.17805298 = score(doc=562,freq=2.0), product of:
            0.31681007 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037368443 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.17805298 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.17805298 = score(doc=562,freq=2.0), product of:
            0.31681007 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037368443 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.17805298 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.17805298 = score(doc=562,freq=2.0), product of:
            0.31681007 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037368443 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.015188723 = product of:
          0.030377446 = sum of:
            0.030377446 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.030377446 = score(doc=562,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.71428573 = coord(5/7)
    
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
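
(The indented score breakdowns attached to each hit are Lucene "explain" trees for the ClassicSimilarity (TF-IDF) ranking model. Their arithmetic can be checked from the values printed. Below is a minimal Python sketch, not part of the original page, using the formulas Lucene documents for TFIDFSimilarity: tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm; Lucene computes in single precision, so expect agreement only up to float rounding. The query terms 3a and 2f look like the percent-encoded ':' and '/' of a pasted URL, which would explain why the top-ranked hits are records whose Content fields carry long encoded links.)

```python
from math import log, sqrt

def clause_weight(freq, doc_freq, max_docs, query_norm, field_norm):
    """Reproduce one weight(...) node of a ClassicSimilarity explain tree."""
    tf = sqrt(freq)                              # 1.4142135 for freq=2.0
    idf = 1.0 + log(max_docs / (doc_freq + 1))   # 8.478011 for docFreq=24
    query_weight = idf * query_norm              # 0.31681007 above
    field_weight = tf * idf * field_norm         # 0.56201804 above
    return query_weight * field_weight           # 0.17805298 above

w_3a = clause_weight(2.0, 24, 44218, 0.037368443, 0.046875)    # _text_:3a in 562
w_22 = clause_weight(2.0, 3622, 44218, 0.037368443, 0.046875)  # _text_:22 in 562

# Entry 1 sums five clause scores (the nested coord(1/3) and coord(1/2)
# factors scale the first and last), then multiplies by coord(5/7):
total = (w_3a / 3 + 3 * w_3a + w_22 / 2) * (5 / 7)
print(w_3a, w_22, total)  # ≈ 0.178053, 0.0303774, 0.4347847
```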
  2. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.40
    0.3956733 = product of:
      0.69242823 = sum of:
        0.06924283 = product of:
          0.20772848 = sum of:
            0.20772848 = weight(_text_:3a in 306) [ClassicSimilarity], result of:
              0.20772848 = score(doc=306,freq=2.0), product of:
                0.31681007 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.037368443 = queryNorm
                0.65568775 = fieldWeight in 306, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=306)
          0.33333334 = coord(1/3)
        0.20772848 = weight(_text_:2f in 306) [ClassicSimilarity], result of:
          0.20772848 = score(doc=306,freq=2.0), product of:
            0.31681007 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037368443 = queryNorm
            0.65568775 = fieldWeight in 306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=306)
        0.20772848 = weight(_text_:2f in 306) [ClassicSimilarity], result of:
          0.20772848 = score(doc=306,freq=2.0), product of:
            0.31681007 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037368443 = queryNorm
            0.65568775 = fieldWeight in 306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=306)
        0.20772848 = weight(_text_:2f in 306) [ClassicSimilarity], result of:
          0.20772848 = score(doc=306,freq=2.0), product of:
            0.31681007 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037368443 = queryNorm
            0.65568775 = fieldWeight in 306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=306)
      0.5714286 = coord(4/7)
    
    Content
    Cf.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
  3. Mas, S.; Marleau, Y.: Proposition of a faceted classification model to support corporate information organization and digital records management (2009) 0.34
    0.33914855 = product of:
      0.5935099 = sum of:
        0.059350993 = product of:
          0.17805298 = sum of:
            0.17805298 = weight(_text_:3a in 2918) [ClassicSimilarity], result of:
              0.17805298 = score(doc=2918,freq=2.0), product of:
                0.31681007 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.037368443 = queryNorm
                0.56201804 = fieldWeight in 2918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2918)
          0.33333334 = coord(1/3)
        0.17805298 = weight(_text_:2f in 2918) [ClassicSimilarity], result of:
          0.17805298 = score(doc=2918,freq=2.0), product of:
            0.31681007 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037368443 = queryNorm
            0.56201804 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
        0.17805298 = weight(_text_:2f in 2918) [ClassicSimilarity], result of:
          0.17805298 = score(doc=2918,freq=2.0), product of:
            0.31681007 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037368443 = queryNorm
            0.56201804 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
        0.17805298 = weight(_text_:2f in 2918) [ClassicSimilarity], result of:
          0.17805298 = score(doc=2918,freq=2.0), product of:
            0.31681007 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037368443 = queryNorm
            0.56201804 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
      0.5714286 = coord(4/7)
    
    Footnote
    Cf.: http://ieeexplore.ieee.org/Xplore/login.jsp?reload=true&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F4755313%2F4755314%2F04755480.pdf%3Farnumber%3D4755480&authDecision=-203.
  4. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.23
    0.22609904 = product of:
      0.3956733 = sum of:
        0.03956733 = product of:
          0.11870199 = sum of:
            0.11870199 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.11870199 = score(doc=701,freq=2.0), product of:
                0.31681007 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.037368443 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
        0.11870199 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.11870199 = score(doc=701,freq=2.0), product of:
            0.31681007 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037368443 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.11870199 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.11870199 = score(doc=701,freq=2.0), product of:
            0.31681007 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037368443 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.11870199 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.11870199 = score(doc=701,freq=2.0), product of:
            0.31681007 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037368443 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.5714286 = coord(4/7)
    
    Content
    Cf.: http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F1627&ei=tAtYUYrBNoHKtQb3l4GYBw&usg=AFQjCNHeaxKkKU3-u54LWxMNYGXaaDLCGw&sig2=8WykXWQoDKjDSdGtAakH2Q&bvm=bv.44442042,d.Yms.
  5. Levinson, R.: Symmetry and the computation of conceptual structures (2000) 0.03
    0.032156922 = product of:
      0.112549216 = sum of:
        0.09482904 = weight(_text_:interpretation in 5081) [ClassicSimilarity], result of:
          0.09482904 = score(doc=5081,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.4430163 = fieldWeight in 5081, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5081)
        0.017720178 = product of:
          0.035440356 = sum of:
            0.035440356 = weight(_text_:22 in 5081) [ClassicSimilarity], result of:
              0.035440356 = score(doc=5081,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.2708308 = fieldWeight in 5081, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5081)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    The discovery and exploitation of symmetry play a major role in sciences such as crystallography, quantum theory, condensed-matter physics, thermodynamics, chemistry, biology and others. It should not be surprising, then, since Conceptual Structures are proposed as a universal knowledge representation scheme, that symmetry should play a role in their interpretation and their application. In this tutorial-style paper, we illustrate the role of symmetry in Conceptual Structures and how algorithms may be constructed that exploit this symmetry in order to achieve computational efficiency.
    Date
    3. 9.2000 19:22:45
  6. Morris, J.: Individual differences in the interpretation of text : implications for information science (2009) 0.03
    0.025964592 = product of:
      0.18175213 = sum of:
        0.18175213 = weight(_text_:interpretation in 3318) [ClassicSimilarity], result of:
          0.18175213 = score(doc=3318,freq=10.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.84909815 = fieldWeight in 3318, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=3318)
      0.14285715 = coord(1/7)
    
    Abstract
    Many tasks in library and information science (e.g., indexing, abstracting, classification, and text analysis techniques such as discourse and content analysis) require text meaning interpretation, and, therefore, any individual differences in interpretation are relevant and should be considered, especially for applications in which these tasks are done automatically. This article investigates individual differences in the interpretation of one aspect of text meaning that is commonly used in such automatic applications: lexical cohesion and lexical semantic relations. Experiments with 26 participants indicate an approximately 40% difference in interpretation. In total, 79, 83, and 89 lexical chains (groups of semantically related words) were analyzed in 3 texts, respectively. A major implication of this result is the possibility of modeling individual differences for individual users. Further research is suggested for different types of texts and readers than those used here, as well as similar research for different aspects of text meaning.
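
(A lexical chain, as used in the abstract above, is just a group of words held together by semantic relations. A toy chaining pass is sketched below; the relatedness table is invented for the example and stands in for a real thesaurus or WordNet.)

```python
# Toy lexical chaining: group words that share an entry in a small,
# hand-made relatedness table (a stand-in for a real lexical resource).
RELATED = {
    ("car", "vehicle"), ("vehicle", "truck"), ("driver", "car"),
    ("tree", "forest"), ("leaf", "tree"),
}

def related(a, b):
    return (a, b) in RELATED or (b, a) in RELATED

def lexical_chains(words):
    chains = []
    for w in words:
        for chain in chains:
            if any(related(w, m) for m in chain):  # join the first related chain
                chain.append(w)
                break
        else:                                      # no related chain: start one
            chains.append([w])
    return chains

print(lexical_chains(["car", "tree", "vehicle", "leaf", "truck", "driver"]))
# [['car', 'vehicle', 'truck', 'driver'], ['tree', 'leaf']]
```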
  7. Qin, J.; Paling, S.: Converting a controlled vocabulary into an ontology : the case of GEM (2001) 0.03
    0.02526938 = product of:
      0.17688565 = sum of:
        0.17688565 = sum of:
          0.116130754 = weight(_text_:anwendung in 3895) [ClassicSimilarity], result of:
            0.116130754 = score(doc=3895,freq=2.0), product of:
              0.1809185 = queryWeight, product of:
                4.8414783 = idf(docFreq=948, maxDocs=44218)
                0.037368443 = queryNorm
              0.6418954 = fieldWeight in 3895, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.8414783 = idf(docFreq=948, maxDocs=44218)
                0.09375 = fieldNorm(doc=3895)
          0.06075489 = weight(_text_:22 in 3895) [ClassicSimilarity], result of:
            0.06075489 = score(doc=3895,freq=2.0), product of:
              0.13085791 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.037368443 = queryNorm
              0.46428138 = fieldWeight in 3895, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.09375 = fieldNorm(doc=3895)
      0.14285715 = coord(1/7)
    
    Date
    24. 8.2005 19:20:22
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  8. Frâncu, V.: ¬An interpretation of the FRBR model (2004) 0.02
    0.024788357 = product of:
      0.08675925 = sum of:
        0.07663343 = weight(_text_:interpretation in 2647) [ClassicSimilarity], result of:
          0.07663343 = score(doc=2647,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.35801122 = fieldWeight in 2647, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.03125 = fieldNorm(doc=2647)
        0.010125816 = product of:
          0.020251632 = sum of:
            0.020251632 = weight(_text_:22 in 2647) [ClassicSimilarity], result of:
              0.020251632 = score(doc=2647,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.15476047 = fieldWeight in 2647, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2647)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Despite the existence of a logical structural model for bibliographic records which integrates any record type, library catalogues persist in offering catalogue records at the level of 'items'. Such records, however, do not clearly indicate which works they contain; hence the search possibilities of the end user are unduly limited. The Functional Requirements for Bibliographic Records (FRBR) present, through a conceptual model independent of any cataloguing code or implementation, a globalized view of the bibliographic universe. This model, a synthesis of the existing cataloguing rules, consists of clearly structured entities and well-defined types of relationships among them. From a theoretical viewpoint, the model is likely to be a good knowledge organiser, with great potential for identifying the author and the work represented by an item or publication, and it is able to link different works of the author with different editions, translations or adaptations of those works, aiming at better answering user needs. This paper presents an interpretation of the FRBR model, opposing it to a traditional bibliographic record of a complex library material.
    Date
    17. 6.2015 14:40:22
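
(The entities the abstract refers to form FRBR's well-known Group 1 chain: work, expression, manifestation, item. The toy sketch below pictures that containment chain; the attribute names are illustrative, not prescribed by FRBR.)

```python
from dataclasses import dataclass, field

# Toy sketch of the FRBR Group 1 chain: Work -> Expression -> Manifestation
# -> Item. Attribute names here are illustrative only.

@dataclass
class Item:
    location: str                     # a concrete exemplar, e.g. a shelf copy

@dataclass
class Manifestation:
    edition: str                      # a published embodiment
    items: list[Item] = field(default_factory=list)

@dataclass
class Expression:
    language: str                     # a realisation, e.g. a translation
    manifestations: list[Manifestation] = field(default_factory=list)

@dataclass
class Work:
    title: str                        # the abstract intellectual creation
    expressions: list[Expression] = field(default_factory=list)

# One work linked to an original-language edition and to a translation:
hamlet = Work("Hamlet", [
    Expression("en", [Manifestation("First Folio, 1623", [Item("shelf QA-1")])]),
    Expression("fr", [Manifestation("French translation, 1865", [])]),
])
print(len(hamlet.expressions))  # 2
```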
  9. Johnson, E.H.: Objects for distributed heterogeneous information retrieval (2000) 0.02
    0.022969227 = product of:
      0.08039229 = sum of:
        0.067735024 = weight(_text_:interpretation in 6959) [ClassicSimilarity], result of:
          0.067735024 = score(doc=6959,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.3164402 = fieldWeight in 6959, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6959)
        0.01265727 = product of:
          0.02531454 = sum of:
            0.02531454 = weight(_text_:22 in 6959) [ClassicSimilarity], result of:
              0.02531454 = score(doc=6959,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.19345059 = fieldWeight in 6959, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6959)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    The success of the World Wide Web shows that we can access, search, and retrieve information from globally distributed databases. If a database, such as a library catalog, has some sort of Web-based front end, we can type its URL into a Web browser and use its HTML-based forms to search for items in that database. Depending on how well the query conforms to the database content, how the search engine interprets the query, and how the server formats the results into HTML, we might actually find something usable. While the first two issues depend on ourselves and the server, on the Web the latter falls to the mercy of HTML, which we all know as a great destroyer of information because it codes for display but not for content description. When looking at an HTML-formatted display, we must depend on our own interpretation to recognize such entities as author names, titles, and subject identifiers. The Web browser can do nothing but display the information. If we want some other view of the result, such as sorting the records by date (provided it offers such an option to begin with), the server must do it. This makes poor use of the computing power we have at the desktop (or even laptop), which, unless it involves retrieving more records, could easily do the result-set manipulation that we currently send back to the server. Despite having personal computers with immense computational power, as far as information retrieval goes, we still essentially use them as dumb terminals.
    Date
    22. 9.1997 19:16:05
  10. Bartlett, J.C.; Toms, E.G.: Developing a protocol for bioinformatics analysis : an integrated information behavior and task analysis approach (2005) 0.02
    0.022969227 = product of:
      0.08039229 = sum of:
        0.067735024 = weight(_text_:interpretation in 5256) [ClassicSimilarity], result of:
          0.067735024 = score(doc=5256,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.3164402 = fieldWeight in 5256, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5256)
        0.01265727 = product of:
          0.02531454 = sum of:
            0.02531454 = weight(_text_:22 in 5256) [ClassicSimilarity], result of:
              0.02531454 = score(doc=5256,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.19345059 = fieldWeight in 5256, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5256)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    The purpose of this research is to capture, understand, and model the process used by bioinformatics analysts when facing a specific scientific problem. Integrating information behavior with task analysis, we interviewed 20 bioinformatics experts about the process they follow to conduct a typical bioinformatics analysis - a functional analysis of a gene, and then used a task analysis approach to model that process. We found that each expert followed a unique process in using bioinformatics resources, but had significant similarities with their peers. We synthesized these unique processes into a standard research protocol, from which we developed a procedural model that describes the process of conducting a functional analysis of a gene. The model protocol consists of a series of 16 individual steps, each of which specifies detail for the type of analysis, how and why it is conducted, the tools used, the data input and output, and the interpretation of the results. The linking of information behavior and task analysis research is a novel approach, as it provides a rich high-level view of information behavior while providing a detailed analysis at the task level. In this article we concentrate on the latter.
    Date
    22. 7.2006 14:28:55
  11. Procházka, D.: ¬The development of uniform titles for choreographic works (2006) 0.02
    0.021895267 = product of:
      0.15326686 = sum of:
        0.15326686 = weight(_text_:interpretation in 223) [ClassicSimilarity], result of:
          0.15326686 = score(doc=223,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.71602243 = fieldWeight in 223, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0625 = fieldNorm(doc=223)
      0.14285715 = coord(1/7)
    
    Abstract
    In 1994, the Library of Congress issued a rule interpretation to AACR2 detailing how uniform titles for choreographic works should be established. The value of the rule interpretation is discussed, and it is contrasted with prior practices. The origins of the concept behind the rule are traced back to the New York Public Library in the mid-twentieth century, and its evolution into the current guidelines is delineated.
  12. Tudhope, D.; Hodge, G.: Terminology registries (2007) 0.02
    0.021057816 = product of:
      0.1474047 = sum of:
        0.1474047 = sum of:
          0.09677563 = weight(_text_:anwendung in 539) [ClassicSimilarity], result of:
            0.09677563 = score(doc=539,freq=2.0), product of:
              0.1809185 = queryWeight, product of:
                4.8414783 = idf(docFreq=948, maxDocs=44218)
                0.037368443 = queryNorm
              0.5349128 = fieldWeight in 539, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.8414783 = idf(docFreq=948, maxDocs=44218)
                0.078125 = fieldNorm(doc=539)
          0.05062908 = weight(_text_:22 in 539) [ClassicSimilarity], result of:
            0.05062908 = score(doc=539,freq=2.0), product of:
              0.13085791 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.037368443 = queryNorm
              0.38690117 = fieldWeight in 539, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=539)
      0.14285715 = coord(1/7)
    
    Date
    26.12.2011 13:22:07
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  13. Rindflesch, T.C.; Fiszman, M.: The interaction of domain knowledge and linguistic structure in natural language processing : interpreting hypernymic propositions in biomedical text (2003) 0.02
    0.019352864 = product of:
      0.13547005 = sum of:
        0.13547005 = weight(_text_:interpretation in 2097) [ClassicSimilarity], result of:
          0.13547005 = score(doc=2097,freq=8.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.6328804 = fieldWeight in 2097, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2097)
      0.14285715 = coord(1/7)
    
    Abstract
    Interpretation of semantic propositions in free-text documents such as MEDLINE citations would provide valuable support for biomedical applications, and several approaches to semantic interpretation are being pursued in the biomedical informatics community. In this paper, we describe a methodology for interpreting linguistic structures that encode hypernymic propositions, in which a more specific concept is in a taxonomic relationship with a more general concept. In order to effectively process these constructions, we exploit underspecified syntactic analysis and structured domain knowledge from the Unified Medical Language System (UMLS). After introducing the syntactic processing on which our system depends, we focus on the UMLS knowledge that supports interpretation of hypernymic propositions. We first use semantic groups from the Semantic Network to ensure that the two concepts involved are compatible; hierarchical information in the Metathesaurus then determines which concept is more general and which more specific. A preliminary evaluation of a sample based on the semantic group Chemicals and Drugs provides 83% precision. An error analysis was conducted and potential solutions to the problems encountered are presented. The research discussed here serves as a paradigm for investigating the interaction between domain knowledge and linguistic structure in natural language processing, and could also make a contribution to research on automatic processing of discourse structure. Additional implications of the system we present include its integration in advanced semantic interpretation processors for biomedical text and its use for information extraction in specific domains. The approach has the potential to support a range of applications, including information retrieval and ontology engineering.
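
(Hypernymic constructions of the kind interpreted in this paper, e.g. "free-text documents such as MEDLINE citations", are classically surfaced with Hearst-style lexico-syntactic patterns. The regex sketch below illustrates only that general idea; it is not the authors' UMLS-based system.)

```python
import re

# Minimal Hearst-style pattern: "<general> such as <specific>(, <specific>)*"
PATTERN = re.compile(r"(\w[\w ]*?)\s+such as\s+([\w ]+(?:,\s*[\w ]+)*)")

def hypernym_pairs(text):
    pairs = []
    for m in PATTERN.finditer(text):
        general = m.group(1).split()[-1]           # crude head-noun guess
        for specific in m.group(2).split(","):
            pairs.append((specific.strip(), general))
    return pairs

print(hypernym_pairs("free-text documents such as MEDLINE citations"))
# [('MEDLINE citations', 'documents')]
```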
  14. Bean, C.A.: Representation of medical knowledge for automated semantic interpretation of clinical reports (2004) 0.02
    0.018961856 = product of:
      0.13273299 = sum of:
        0.13273299 = weight(_text_:interpretation in 2660) [ClassicSimilarity], result of:
          0.13273299 = score(doc=2660,freq=12.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.6200936 = fieldWeight in 2660, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.03125 = fieldNorm(doc=2660)
      0.14285715 = coord(1/7)
    
    Abstract
    A set of cardiac catheterisation case reports was analysed to identify, and encode for automated interpretation, the semantic indicators of location and severity of disease in coronary arteries. Presence of disease was indicated by the use of specific or general disease terms, typically with a modifier, while absence of disease was indicated by negation of similar phrases. Disease modifiers indicating severity could be qualitative or quantitative, and a 7-point severity scale was devised to normalise these modifiers based on relative clinical significance. Location of disease was indicated in three basic ways: by situation in arbitrary topographic divisions, by situation relative to a named structure, or by using named structures as boundary delimiters to describe disease extent. In addition, semantic indicators were identified for such topological relationships as proximity, contiguity, overlap, and enclosure. Spatial reasoning was often necessary to understand the specific localisation of disease, demonstrating the need for a general spatial extension to the underlying knowledge base.
    Content
    1. Introduction In automated semantic interpretation, the expressions in natural language text are mapped to a knowledge model, thus providing a means of normalising the relevant concepts and relationships encountered. However, the ultimate goal of comprehensive and consistent semantic interpretation of unrestricted text, even within a single domain such as medicine, is still beyond the current state of the art of natural language processing. In order to scale back the complexity of the task of automated semantic interpretation, we have restricted our domain of interest to coronary artery anatomy and our text to cardiac catheterisation reports. Using a multi-phased approach, a staged series of projects is enhancing the development of a semantic interpretation system for free clinical text in the specific subdomain of coronary arteriography.
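
(The 7-point severity scale described above can be pictured as a lookup that normalises both kinds of modifier onto one axis. In the toy sketch below the particular mappings are invented; only the idea of a 7-point scale for qualitative and quantitative modifiers comes from the abstract.)

```python
# Toy normalisation of severity modifiers onto a 7-point scale; the
# mapping values here are invented for illustration.
QUALITATIVE = {"minimal": 1, "mild": 2, "moderate": 4, "severe": 6, "critical": 7}

def severity(modifier):
    if modifier.endswith("%"):                    # quantitative, e.g. "90%"
        pct = float(modifier.rstrip("%"))
        return min(7, max(1, round(pct / 100 * 7)))
    return QUALITATIVE.get(modifier.lower())      # qualitative, None if unknown

print(severity("moderate"), severity("90%"))      # 4 6
```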
  15. Larsen, B.; Ingwersen, P.; Lund, B.: Data fusion according to the principle of polyrepresentation (2009) 0.02
    0.018375382 = product of:
      0.06431384 = sum of:
        0.05418802 = weight(_text_:interpretation in 2752) [ClassicSimilarity], result of:
          0.05418802 = score(doc=2752,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.25315216 = fieldWeight in 2752, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.03125 = fieldNorm(doc=2752)
        0.010125816 = product of:
          0.020251632 = sum of:
            0.020251632 = weight(_text_:22 in 2752) [ClassicSimilarity], result of:
              0.020251632 = score(doc=2752,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.15476047 = fieldWeight in 2752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2752)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    We report data fusion experiments carried out on the four best-performing retrieval models from TREC 5. Three were conceptually/algorithmically very different from one another; one was algorithmically similar to one of the former. The objective of the test was to observe the performance of the 11 logical data fusion combinations compared to the performance of the four individual models and their intermediate fusions when following the principle of polyrepresentation. This principle is based on the cognitive IR perspective (Ingwersen & Järvelin, 2005) and implies that each retrieval model is regarded as a representation of a unique interpretation of information retrieval (IR). It predicts that only fusions of very different, but equally good, IR models may outperform each constituent as well as their intermediate fusions. Two kinds of experiments were carried out. One tested restricted fusions, which entails that only the inner disjoint overlap documents between fused models are ranked. The second set of experiments was based on traditional data fusion methods. The experiments involved the 30 TREC 5 topics that contain more than 44 relevant documents. In all tests, the Borda and CombSUM scoring methods were used. Performance was measured by precision and recall, with document cutoff values (DCVs) at 100 and 15 documents, respectively. Results show that restricted fusions made of two, three, or four cognitively/algorithmically very different retrieval models perform significantly better than the individual models at DCV100. At DCV15, however, the results of polyrepresentative fusion were less predictable. The traditional fusion method based on polyrepresentation principles demonstrates a clear picture of performance at both DCV levels and verifies the polyrepresentation predictions for data fusion in IR. Data fusion improves retrieval performance over the constituent IR models only if the models are all quite conceptually/algorithmically dissimilar and equally well performing, in that order of importance.
    Date
    22. 3.2009 18:48:28
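
(Borda and CombSUM, the two scoring methods named in the abstract, are standard fusion rules: CombSUM sums a document's scores across runs, while Borda awards rank-based points. The sketch below follows those usual definitions; it is not the paper's exact implementation, and it assumes scores are already normalised across runs.)

```python
def comb_sum(runs):
    """CombSUM: sum each document's (normalised) scores across runs."""
    fused = {}
    for run in runs:                         # run: {doc_id: score}
        for doc, score in run.items():
            fused[doc] = fused.get(doc, 0.0) + score
    return sorted(fused, key=fused.get, reverse=True)

def borda(rankings, pool_size):
    """Borda: a document at rank r earns (pool_size - r) points per run."""
    points = {}
    for ranking in rankings:                 # ranking: [doc_id, ...] best first
        for r, doc in enumerate(ranking):
            points[doc] = points.get(doc, 0) + (pool_size - r)
    return sorted(points, key=points.get, reverse=True)

print(comb_sum([{"d1": 0.9, "d2": 0.4}, {"d2": 0.8, "d3": 0.3}]))  # ['d2', 'd1', 'd3']
print(borda([["d1", "d2"], ["d2", "d3"]], pool_size=3))            # ['d2', 'd1', 'd3']
```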
  16. Hochheiser, H.; Shneiderman, B.: Using interactive visualizations of WWW log data to characterize access patterns and inform site design (2001) 0.02
    0.01642145 = product of:
      0.11495014 = sum of:
        0.11495014 = weight(_text_:interpretation in 5765) [ClassicSimilarity], result of:
          0.11495014 = score(doc=5765,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.5370168 = fieldWeight in 5765, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=5765)
      0.14285715 = coord(1/7)
    
    Abstract
    HTTP server log files provide Web site operators with substantial detail regarding the visitors to their sites. Interest in interpreting this data has spawned an active market for software packages that summarize and analyze this data, providing histograms, pie graphs, and other charts summarizing usage patterns. Although helpful, these summaries obscure useful information and restrict users to passive interpretation of static displays. Interactive visualizations can be used to provide users with greater abilities to interpret and explore Web log data. By combining two-dimensional displays of thousands of individual access requests, color and size coding for additional attributes, and facilities for zooming and filtering, these visualizations provide capabilities for examining data that exceed those of traditional Web log analysis tools. We introduce a series of interactive visualizations that can be used to explore server data across various dimensions. Possible uses of these visualizations are discussed, and difficulties of data collection, presentation, and interpretation are explored.
  17. Rafferty, P.; Hidderley, R.: ¬A survey of image retrieval tools (2004) 0.02
    0.01642145 = product of:
      0.11495014 = sum of:
        0.11495014 = weight(_text_:interpretation in 2670) [ClassicSimilarity], result of:
          0.11495014 = score(doc=2670,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.5370168 = fieldWeight in 2670, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=2670)
      0.14285715 = coord(1/7)
    
    Abstract
    Issues regarding interpretation and the locus of meaning in the image sign (objectivist, constructionist or subjectivist) are clearly important in relation to reading images and are well documented in the literature (Svenonius, 1994; Shatford, 1984, 1986; Layne, 1994; Enser, 1991, 1995; Rafferty Brown & Hidderley, 1996). The same issues of interpretation and reading pertain to image indexing tools which themselves are the result of choice, design and construction. Indexing becomes constrained and specific when a particular controlled vocabulary is adhered to. Indexing tools can often work better for one type of document than another. In this paper we discuss the different 'flavours' of three image retrieval tools: the Art and Architecture Thesaurus, Iconclass and the Library of Congress Thesaurus for Graphic Materials.
  18. Dominich, S.; Skrop, A.: PageRank and interaction information retrieval (2005) 0.02
    0.01642145 = product of:
      0.11495014 = sum of:
        0.11495014 = weight(_text_:interpretation in 3268) [ClassicSimilarity], result of:
          0.11495014 = score(doc=3268,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.5370168 = fieldWeight in 3268, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=3268)
      0.14285715 = coord(1/7)
    
    Abstract
    The PageRank method is used by the Google Web search engine to compute the importance of Web pages. Two different views have been developed for the interpretation of the PageRank method and values: (a) stochastic (random surfer): the PageRank values can be conceived as the steady-state distribution of a Markov chain, and (b) algebraic: the PageRank values form the eigenvector corresponding to eigenvalue 1 of the Web link matrix. The Interaction Information Retrieval (I²R) method is a nonclassical information retrieval paradigm, which represents a connectionist approach based on dynamic systems. In the present paper, a different interpretation of PageRank is proposed, namely a dynamic systems viewpoint, by showing that the PageRank method can be formally interpreted as a particular case of the Interaction Information Retrieval method; thus, the PageRank values may be interpreted as neutral equilibrium points of the Web.
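
(The stochastic, random-surfer reading is easy to make concrete: PageRank is the steady state reached by repeatedly redistributing probability mass along links. A minimal power-iteration sketch follows; the damping factor 0.85 and the tiny example graph are assumptions, not taken from the paper.)

```python
def pagerank(links, d=0.85, iters=50):
    """Power iteration: PageRank as the steady state of the random-surfer chain."""
    pages = list(links)
    pr = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        nxt = {p: (1 - d) / len(pages) for p in pages}
        for p, outs in links.items():
            for q in outs:
                nxt[q] += d * pr[p] / len(outs)   # p passes its rank along its links
        pr = nxt
    return pr

# Tiny three-page web: A links to B and C, B to C, C back to A.
ranks = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]})
print({p: round(v, 3) for p, v in ranks.items()})  # C collects the most rank
```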
  19. Thelwall, M.; Vann, K.; Fairclough, R.: Web issue analysis : an integrated water resource management case study (2006) 0.02
    0.01642145 = product of:
      0.11495014 = sum of:
        0.11495014 = weight(_text_:interpretation in 5906) [ClassicSimilarity], result of:
          0.11495014 = score(doc=5906,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.5370168 = fieldWeight in 5906, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=5906)
      0.14285715 = coord(1/7)
    
    Abstract
    In this article Web issue analysis is introduced as a new technique to investigate an issue as reflected on the Web. The issue chosen, integrated water resource management (IWRM), is a United Nations-initiated paradigm for managing water resources in an international context, particularly in developing nations. As with many international governmental initiatives, there is a considerable body of online information about it: 41,381 hypertext markup language (HTML) pages and 28,735 PDF documents mentioning the issue were downloaded. A page uniform resource locator (URL) and link analysis revealed the international and sectoral spread of IWRM. A noun and noun phrase occurrence analysis was used to identify the issues most commonly discussed, revealing some unexpected topics such as private sector and economic growth. Although the complexity of the methods required to produce meaningful statistics from the data is disadvantageous to easy interpretation, it was still possible to produce data that could be subject to a reasonably intuitive interpretation. Hence Web issue analysis is claimed to be a useful new technique for information science.
  20. Wissensorganisation in kooperativen Lern- und Arbeitsumgebungen : Proceedings der 8. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Regensburg, 9.-11. Oktober 2002 (2004) 0.02
    0.015759248 = product of:
      0.055157363 = sum of:
        0.040641017 = weight(_text_:interpretation in 5864) [ClassicSimilarity], result of:
          0.040641017 = score(doc=5864,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.18986413 = fieldWeight in 5864, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0234375 = fieldNorm(doc=5864)
        0.014516344 = product of:
          0.029032689 = sum of:
            0.029032689 = weight(_text_:anwendung in 5864) [ClassicSimilarity], result of:
              0.029032689 = score(doc=5864,freq=2.0), product of:
                0.1809185 = queryWeight, product of:
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.037368443 = queryNorm
                0.16047385 = fieldWeight in 5864, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=5864)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    With the development of knowledge and worldwide communication, knowledge organization increasingly takes on a key role. On the one hand, the aim is to understand what knowledge is and how it is structured; on the other, to develop the techniques for electronic representation and retrieval of knowledge beyond their current state. This involves a wide variety of applications, e.g. the textualization of knowledge, research support, the provision of knowledge in work and decision processes, continuing education, ordering, knowledge linking, the fostering of innovation, and more. Under the motto "Wissensorganisation in kooperativen Lern- und Arbeitsumgebungen" (knowledge organization in cooperative learning and working environments), and continuing similar themes from the two preceding conferences, the Wissensorganisation 2002 conference was therefore to examine methods of knowledge organization and the benefits of applying them in eLearning activities, and conversely to take up eLearning methods for knowledge organization. Didactic models such as learning ontologies were up for debate, as were applications of tools for knowledge modelling and for conceptual knowledge structuring. The goal was to work out the contribution of knowledge organization to the development of working techniques and new learning cultures, and at the same time to make didactic concepts fruitful for knowledge organization. The following topics sketch the orientation of this undertaking by example: terminological control in online learning environments; how to organize (on procedures of knowledge arrangement); foundations for the design of knowledge organization and learning systems; the user as learner, the learner as user; and teachers as authors (a view of the knowledge producer). The treatment of practical fields of work and tools, e.g. metadata organization with XML, is complemented by deeper reflection on knowledge, gathering suggestions for the conception of new knowledge systems. Here questions arise about the decomposability of knowledge, the determination of knowledge units, the language invariance of knowledge, knowledge formalization, the pinpoint provision of knowledge for specific problems, and so on. Answers are also sought to the task of safeguarding the totality and wholeness of knowledge. The present volume contains 20 contributions, including three elaborated versions of talks that were given at the 7th German ISKO conference 2001 in Berlin but fit well into the spectrum presented here (by Maik Adomßent on learning administrations, by Alfred Gerstenkorn on understanding management, and by Christina Rautenstrauch on tele-tutoring). Also added are a contribution by Thomas Sporer examining the video documentation produced during the conference, and a contribution by Peter Ohly on semantic maps, which was listed in the programme of the previous conference but, owing to programme changes, was first presented in 2002 in Regensburg. Norbert Meder's talk on metadata for learning administrations will be published in 2004 in the Festschrift for Klaus Peter Treumann (University of Bielefeld), and Christian Swertz's contribution on cooperative authorship is planned for publication at a later date.
    Content
    3. Kooperative Arbeitsumgebungen (cooperative working environments):
    Maik ADOMßENT (from Berlin 2001): Gestaltungspotenziale kollaborativer Wissensnetzwerke in "Lernenden Verwaltungen" am Beispiel des praxisbezogenen Online-Kurses "Projektmanagement" der Universität Lüneburg, S.123
    Andreas WENDT: Standardisierungen im E-Learning-Bereich zur Unterstützung der automatisierten Komposition von Lernmaterialien, S.133
    Katja MRUCK, Marion NIEHOFF, Guenter MEY: Forschungsunterstützung in kooperativen Lernumgebungen: Das Beispiel der "Projektwerkstatt Qualitativen Arbeitens" als Offline- und Online-Begleitkonzept, S.143
    Irmhild ROGULLA, Mirko PREHN: Arbeitsprozessorientierte Weiterbildung: Prozess-Systematik als Basis für Informationsaneignung, Wissenserwerb und Kompetenzentwicklung, S.151
    4. Wissensmanagement und Informationsdesign (knowledge management and information design):
    Alexander SIGEL: Wissensmanagement in der Praxis: Wann, wie und warum hilft dort Wissensorganisation (nicht)?, S.163
    Johannes GADNER, Doris OHNESORGE, Tine ADLER, Renate BUBER: Repräsentation und Organisation von Wissen zur Entscheidungsunterstützung im Management, S.175
    Kerstin ZIMMERMANN: Die Anforderungen an ein wissenschaftliches Informationsportal für die Telekommunikation, S.187
    Philip ZERWECK: Gestaltung und Erstellung komplexer Informationsangebote im Web, S.197
    H. Peter OHLY (from Berlin 2001): Erstellung und Interpretation von semantischen Karten am Beispiel des Themas "Soziologische Beratung", S.205
    Thomas SPORER, Anton KÖSTLBACHER: Digitale Dokumentation von wissenschaftlichen Veranstaltungen, S.219

Types

  • a 707
  • m 86
  • el 64
  • s 44
  • b 25
  • x 6
  • n 4
  • r 2
  • i 1
