Search (1402 results, page 1 of 71)

  • Filter: year_i:[2000 TO 2010}
  1. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.30
    Score: coord(3/4) × ("3a" 0.07245665 + "2f" 0.21736994 + "logic" 0.10997967) = 0.2998547
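    The scores in this list are Lucene ClassicSimilarity TF-IDF values, as labelled in the engine's explain output. A minimal sketch (function and variable names are ours) that reconstructs the reported weight of term "2f" in this hit from the statistics the engine gives:

      import math

      def term_weight(freq, doc_freq, max_docs, field_norm, query_norm):
          # Lucene ClassicSimilarity: weight = queryWeight * fieldWeight, where
          #   idf         = 1 + ln(maxDocs / (docFreq + 1))
          #   queryWeight = idf * queryNorm
          #   fieldWeight = sqrt(freq) * idf * fieldNorm
          idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))
          return (idf * query_norm) * (math.sqrt(freq) * idf * field_norm)

      # Values the engine reported for term "2f" in this document (doc 306):
      w = term_weight(freq=2.0, doc_freq=24, max_docs=44218,
                      field_norm=0.0546875, query_norm=0.039102852)
      print(round(w, 8))  # ~0.21736994; the hit's total also applies coord(3/4)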
    
    Abstract
    Although service-oriented architectures go a long way toward providing interoperability in distributed, heterogeneous environments, managing semantic differences in such environments remains a challenge. We give an overview of the issue of semantic interoperability (integration), provide a semantic characterization of services, and discuss the role of ontologies. Then we analyze four basic models of semantic interoperability that differ in how service descriptions are mapped to ontologies and in where the integration logic is evaluated. We also provide some guidelines for selecting among the possible interoperability models.
    Content
    Cf.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
  2. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.19
    Score: coord(3/4) × ("3a" 0.062105697 + "2f" 0.18631709 + "22" 0.010595793) = 0.19426394
    
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
  3. Schrodt, R.: Tiefen und Untiefen im wissenschaftlichen Sprachgebrauch (2008) 0.17
    Score: coord(2/4) × ("3a" 0.0828076 + "2f" 0.24842279) = 0.1656152
    
    Content
    See also: https://studylibde.com/doc/13053640/richard-schrodt. See also: http%3A%2F%2Fwww.univie.ac.at%2FGermanistik%2Fschrodt%2Fvorlesung%2Fwissenschaftssprache.doc&usg=AOvVaw1lDLDR6NFf1W0-oC9mEUJf.
  4. Mas, S.; Marleau, Y.: Proposition of a faceted classification model to support corporate information organization and digital records management (2009) 0.12
    Score: coord(2/4) × ("3a" 0.062105697 + "2f" 0.18631709) = 0.12421139
    
    Footnote
    Cf.: http://ieeexplore.ieee.org/Xplore/login.jsp?reload=true&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F4755313%2F4755314%2F04755480.pdf%3Farnumber%3D4755480&authDecision=-203.
  5. Donsbach, W.: Wahrheit in den Medien : über den Sinn eines methodischen Objektivitätsbegriffes (2001) 0.10
    Score: coord(2/4) × ("3a" 0.05175475 + "2f" 0.15526424) = 0.1035095
    
    Source
    Politische Meinung. 381(2001) Nr.1, S.65-74 [https%3A%2F%2Fwww.dgfe.de%2Ffileadmin%2FOrdnerRedakteure%2FSektionen%2FSek02_AEW%2FKWF%2FPublikationen_Reihe_1989-2003%2FBand_17%2FBd_17_1994_355-406_A.pdf&usg=AOvVaw2KcbRsHy5UQ9QRIUyuOLNi]
  6. Olson, H.A.: How we construct subjects : a feminist analysis (2007) 0.08
    Score: coord(2/4) × ("logic" 0.1571138 + "22" 0.008829828) = 0.08297182
    
    Abstract
    To organize information, librarians create structures. These structures grow from a logic that goes back at least as far as Aristotle. It is the basis of classification as we practice it, and thesauri and subject headings have developed from it. Feminist critiques of logic suggest that logic is gendered in nature. This article will explore how these critiques play out in contemporary standards for the organization of information. Our widely used classification schemes embody principles such as hierarchical force that conform to traditional/Aristotelian logic. Our subject heading strings follow a linear path of subdivision. Our thesauri break down subjects into discrete concepts. In thesauri and subject heading lists we privilege hierarchical relationships, reflected in the syndetic structure of broader and narrower terms, over all other relationships. Are our classificatory and syndetic structures gendered? Are there other options? Carol Gilligan's In a Different Voice (1982), Women's Ways of Knowing (Belenky, Clinchy, Goldberger, & Tarule, 1986), and more recent related research suggest a different type of structure for women's knowledge grounded in "connected knowing." This article explores current and potential elements of connected knowing in subject access with a focus on the relationships, both paradigmatic and syntagmatic, between concepts.
    Date
    11.12.2019 19:00:22
  7. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.08
    Score: coord(2/4) × ("3a" 0.0414038 + "2f" 0.12421139) = 0.0828076
    
    Content
    Cf.: http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F1627&ei=tAtYUYrBNoHKtQb3l4GYBw&usg=AFQjCNHeaxKkKU3-u54LWxMNYGXaaDLCGw&sig2=8WykXWQoDKjDSdGtAakH2Q&bvm=bv.44442042,d.Yms.
  8. Renear, A.H.; Wickett, K.M.; Urban, R.J.; Dubin, D.; Shreeves, S.L.: Collection/item metadata relationships (2008) 0.07
    Score: coord(2/4) × ("logic" 0.1333155 + "22" 0.010595793) = 0.07195565
    
    Abstract
    Contemporary retrieval systems, which search across collections, usually ignore collection-level metadata. Alternative approaches, exploiting collection-level information, will require an understanding of the various kinds of relationships that can obtain between collection-level and item-level metadata. This paper outlines the problem and describes a project that is developing a logic-based framework for classifying collection/item metadata relationships. This framework will support (i) metadata specification developers defining metadata elements, (ii) metadata creators describing objects, and (iii) system designers implementing systems that take advantage of collection-level metadata. We present three examples of collection/item metadata relationship categories (attribute/value-propagation, value-propagation, and value-constraint) and show that even in these simple cases a precise formulation requires modal notions in addition to first-order logic. These formulations are related to recent work in information retrieval and ontology evaluation.
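    A toy illustration of the three categories (ours, with invented fields and values; the paper's actual formalization is logic-based and partly modal):

      # Invented collection-level record and item-level records.
      collection = {"rights": "public-domain", "title": "FSA photographs",
                    "earliest_date": 1900}
      items = [{"rights": "public-domain", "from_collection": "FSA photographs", "date": 1935},
               {"rights": "public-domain", "from_collection": "FSA photographs", "date": 1897}]

      # Attribute/value-propagation: the collection's field and value hold verbatim on each item.
      ok1 = all(it["rights"] == collection["rights"] for it in items)
      # Value-propagation: the collection's value reappears on items under a different field.
      ok2 = all(it["from_collection"] == collection["title"] for it in items)
      # Value-constraint: the collection's value bounds item values instead of equalling them.
      ok3 = all(it["date"] >= collection["earliest_date"] for it in items)
      print(ok1, ok2, ok3)  # True True False -- the 1897 item violates the constraint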
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  9. Losada, D.E.; Barreiro, A.: Embedding term similarity and inverse document frequency into a logical model of information retrieval (2003) 0.07
    Score: coord(2/4) × ("logic" 0.12569106 + "22" 0.014127724) = 0.069909394
    
    Abstract
    We propose a novel approach to incorporate term similarity and inverse document frequency into a logical model of information retrieval. The ability of the logic to handle expressive representations along with the use of such classical notions are promising characteristics for IR systems. The approach proposed here has been efficiently implemented and experiments against test collections are presented.
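    Not the authors' logical model, but a crude sketch (ours, all numbers invented) of combining the same two ingredients: idf weighting plus term-similarity credit for near-miss query terms:

      idf = {"logic": 6.03, "retrieval": 2.10, "inference": 4.50}   # invented idf values
      sim = {("logic", "reasoning"): 0.7}                           # invented term similarity

      def similarity(a, b):
          return 1.0 if a == b else sim.get((a, b), sim.get((b, a), 0.0))

      def score(query_terms, doc_terms):
          # Each query term contributes its idf, scaled by its best match in the document.
          return sum(idf.get(q, 0.0) * max((similarity(q, d) for d in doc_terms), default=0.0)
                     for q in query_terms)

      print(score(["logic", "retrieval"], {"reasoning", "retrieval"}))  # 0.7*6.03 + 1.0*2.10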
    Date
    22. 3.2003 19:27:23
  10. Jouis, C.: Logic of relationships (2002) 0.06
    Score: coord(2/4) × ("logic" 0.11109625 + "22" 0.008829828) = 0.05996304
    
    Abstract
    A main goal of recent studies in semantics is to integrate into conceptual structures the models of representation used in linguistics, logic, and/or artificial intelligence. A fundamental problem resides in the need to structure knowledge and then to check the validity of constructed representations. We propose associating logical properties with relationships by introducing the relationships into a typed and functional system of specifications. This makes it possible to compare conceptual representations against the relationships established between the concepts. The mandatory condition to validate such a conceptual representation is consistency. The semantic system proposed is based on a structured set of semantic primitives (types, relations, and properties) grounded in a global model of language processing, Applicative and Cognitive Grammar (ACG) (Desclés, 1990), and an extension of this model to terminology (Jouis & Mustafa 1995, 1996, 1997). The ACG postulates three levels of representation of languages, including a cognitive level. At this level, the meanings of lexical predicates are represented by semantic cognitive schemes. From this perspective, we propose a set of semantic concepts, which defines an organized system of meanings. Relations are part of a specification network based on a general terminological scheme (i.e., a coherent system of meanings of relations). In such a system, a specific relation may be characterized as to its: (1) functional type (the semantic type of the arguments of the relation); (2) algebraic properties (reflexivity, symmetry, transitivity, etc.); and (3) combinatorial relations with other entities in the same context (for instance, the part of the text where a concept is defined).
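    Point (2) is easy to make concrete: given a relation extensionally, its algebraic properties can be checked directly (a sketch of ours; the example relation is invented):

      def is_reflexive(rel, domain):
          return all((x, x) in rel for x in domain)

      def is_symmetric(rel):
          return all((y, x) in rel for (x, y) in rel)

      def is_transitive(rel):
          return all((x, w) in rel for (x, y) in rel for (z, w) in rel if y == z)

      # Invented "part-of" relation over a toy terminology.
      part_of = {("piston", "engine"), ("engine", "car"), ("piston", "car")}
      things = {"piston", "engine", "car"}
      print(is_reflexive(part_of, things), is_symmetric(part_of), is_transitive(part_of))
      # False False True: part-of here is transitive but neither reflexive nor symmetric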
    Date
    1.12.2002 11:12:22
  11. Janes, J.: ¬The logic of inference (2001) 0.06
    Score: coord(1/4) × ("logic" 0.2221925) = 0.055548124
    
    Abstract
    This column continues a series on topics in research methodology, statistics and data analysis techniques for the library and information sciences. It discusses the logic implicit in statistical inference, which underlies many tests and procedures used in quantitative scientific inquiry.
  12. Margaritopoulos, T.; Margaritopoulos, M.; Mavridis, I.; Manitsaris, A.: ¬A conceptual framework for metadata quality assessment (2008) 0.05
    Score: coord(2/4) × ("logic" 0.094268285 + "22" 0.010595793) = 0.052432038
    
    Abstract
    Metadata quality of digital resources in a repository is an issue directly associated with the repository's efficiency and value. In this paper, the subject of metadata quality is approached by introducing a new conceptual framework that defines it in terms of its fundamental components. Additionally, a method for assessing these components by exploiting structural and semantic relations among the resources is presented. These relations can be used to generate implied logic rules, which include, impose or prohibit certain values in the fields of a metadata record. The use of such rules can serve as a tool for conducting quality control in the records, in order to diagnose deficiencies and errors.
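    A sketch of the idea with invented fields and rules: a semantic relation between resources implies rules that impose or prohibit field values, and violations surface as quality problems:

      # Invented records and a structural relation between two resources.
      records = {"doc1": {"language": "en", "format": "pdf"},
                 "doc2": {"language": "en", "format": "exe"}}
      is_translation_of = {"doc2": "doc1"}

      def quality_problems(records):
          problems = []
          for rec, source in is_translation_of.items():
              # Implied rule: a translation's language must differ from its source's.
              if records[rec]["language"] == records[source]["language"]:
                  problems.append(f"{rec}: language should differ from {source}")
          for rec_id, fields in records.items():
              # Prohibition rule: certain values are simply not allowed.
              if fields["format"] == "exe":
                  problems.append(f"{rec_id}: prohibited format 'exe'")
          return problems

      print(quality_problems(records))  # flags both deficiencies in doc2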
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  13. Broughton, V.: Henry Evelyn Bliss : the other immortal or a prophet without honour? (2008) 0.05
    Score: coord(2/4) × ("bliss" 0.08935098 + "22" 0.012361759) = 0.05085637
    
    Abstract
    The paper takes a retrospective look at the work of Henry Evelyn Bliss, classificationist theorist and author of the Bibliographic Classification. Major features of his writings and philosophy are examined and evaluated for the originality of their contribution to the corpus of knowledge in the discipline. Reactions to Bliss's work are analysed, as is his influence on classification theory of the 20th century. Contemporary work on knowledge organization is seen to continue a number of strands from Bliss's original writings. His standing as a classificationist is compared with that of Ranganathan, with the conclusion that he is not given the credit he deserves.
    Biographed
    Bliss, Henry Evelyn
    Date
    9. 2.1997 18:44:22
  14. Eklund, P.; Groh, B.; Stumme, G.; Wille, R.: ¬A conceptual-logic extension of TOSCANA (2000) 0.05
    Score: coord(1/4) × ("logic" 0.18853657) = 0.047134142
    
    Abstract
    The aim of this paper is to indicate how TOSCANA may be extended to allow graphical representations not only of concept lattices but also of concept graphs in the sense of Contextual Logic. The contextual-logic extension of TOSCANA requires the logical scaling of conceptual and relational scales, for which we propose the Peircean Algebraic Logic as reconstructed by R. W. Burch. As graphical representations we recommend, besides labelled line diagrams of concept lattices and Sowa's diagrams of conceptual graphs, particular information maps for utilizing background knowledge as much as possible. Our considerations are illustrated by a small information system about the domestic flights in Austria.
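    TOSCANA's line diagrams depict concept lattices from Formal Concept Analysis; a brute-force sketch (ours) that enumerates the formal concepts of a tiny invented context, standing in for the Austrian flight data:

      from itertools import combinations

      # Invented formal context: which cities each flight serves.
      context = {"flight1": {"Vienna", "Graz"},
                 "flight2": {"Vienna", "Linz"},
                 "flight3": {"Vienna", "Graz", "Linz"}}
      objects = set(context)

      def intent(objs):   # attributes common to all objects in objs
          return set.intersection(*(context[o] for o in objs)) if objs \
                 else set.union(*context.values())

      def extent(attrs):  # objects carrying every attribute in attrs
          return {o for o in objects if attrs <= context[o]}

      # A formal concept is a closed (extent, intent) pair; closing every
      # object subset finds them all (fine for toy-sized contexts).
      concepts = {(frozenset(extent(intent(set(objs)))), frozenset(intent(set(objs))))
                  for r in range(len(objects) + 1)
                  for objs in combinations(sorted(objects), r)}

      for ext, inr in sorted(concepts, key=lambda c: len(c[0])):
          print(sorted(ext), "<->", sorted(inr))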
  15. Levesque, H.J.; Lakemeyer, G.: ¬The logic of knowledge bases (2000) 0.05
    Score: coord(1/4) × ("logic" 0.18853657) = 0.047134142
    
  16. Miller, R.: Three problems in logic-based knowledge representation (2006) 0.05
    Score: coord(1/4) × ("logic" 0.18853657) = 0.047134142
    
    Abstract
    Purpose - The purpose of this article is to give a non-technical overview of some of the technical progress made recently on tackling three fundamental problems in the area of formal knowledge representation/artificial intelligence. These are the Frame Problem, the Ramification Problem, and the Qualification Problem. The article aims to describe the development of two logic-based languages, the Event Calculus and Modular-E, to address various aspects of these issues. The article also aims to set this work in the wider context of contemporary developments in applied logic, non-monotonic reasoning and formal theories of common sense. Design/methodology/approach - The study applies symbolic logic to model aspects of human knowledge and reasoning. Findings - The article finds that there are fundamental interdependencies between the three problems mentioned above. The conceptual framework shared by the Event Calculus and Modular-E is appropriate for providing principled solutions to them. Originality/value - This article provides an overview of an important approach to dealing with three fundamental issues in artificial intelligence.
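    A drastically simplified, propositional flavor of the Event Calculus named above (our sketch; the real languages also address ramifications and qualifications). Its answer to the Frame Problem is that fluents persist by default between events:

      # Invented narrative: (time, event) pairs, plus effect axioms.
      events = [(1, "switch_on"), (5, "switch_off")]
      initiates = {"switch_on": "light_on"}    # event -> fluent it makes true
      terminates = {"switch_off": "light_on"}  # event -> fluent it makes false

      def holds_at(fluent, t):
          # Default persistence: the fluent holds at t iff some earlier event
          # initiated it and no event in between terminated it -- no frame
          # axiom per fluent/event pair is needed.
          holds = False
          for time, ev in sorted(events):
              if time >= t:
                  break
              if initiates.get(ev) == fluent:
                  holds = True
              elif terminates.get(ev) == fluent:
                  holds = False
          return holds

      print(holds_at("light_on", 3), holds_at("light_on", 7))  # True False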
  17. Fuhr, N.: Probabilistic datalog : implementing logical information retrieval for advanced applications (2000) 0.04
    Score: coord(1/4) × ("logic" 0.177754) = 0.0444385
    
    Abstract
    In the logical approach to information retrieval, retrieval is considered as uncertain inference. Whereas classical IR models are based on propositional logic, we combine Datalog (function-free Horn clause predicate logic) with probability theory. Therefore, probabilistic weights may be attached to both facts and rules. The underlying semantics extends the well-founded semantics of modularly stratified Datalog to a possible worlds semantics
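    A toy reading (ours) of probabilistic Datalog under a naive independence assumption; real implementations track event expressions so that shared facts across derivations are combined correctly:

      # Probabilistic facts, read as independent events (numbers invented):
      facts = {("about", "d1", "ir"): 0.8,
               ("about", "d1", "logic"): 0.5}

      # Two weighted rules for the same head, probabilistic-Datalog style:
      #   0.9 relevant(D) :- about(D, ir).
      #   0.4 relevant(D) :- about(D, logic).
      rules = [(0.9, "ir"), (0.4, "logic")]

      def p_relevant(doc):
          # Each rule derives the head with P = weight * P(body); assuming
          # independence, the derivations combine by noisy-or.
          p_none = 1.0
          for weight, topic in rules:
              p_none *= 1.0 - weight * facts.get(("about", doc, topic), 0.0)
          return 1.0 - p_none

      print(p_relevant("d1"))  # 1 - (1 - 0.72)(1 - 0.20) = 0.776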
  18. King, D.W.: Blazing new trails : in celebration of an audacious career (2000) 0.04
    Score: coord(2/4) × ("logic" 0.0785569 + "22" 0.008829828) = 0.043693364
    
    Abstract
    I had the distinct pleasure of working with Pauline Atherton (Cochrane) during the 1960s, a period that can be considered the heyday of automated information system design and evaluation in the United States. I first met Pauline at the 1962 American Documentation Institute annual meeting in North Hollywood, Florida. My company, Westat Research Analysts, had recently been awarded a contract by the U.S. Patent Office to provide statistical support for the design of experiments with automated information retrieval systems. I was asked to attend the meeting to learn more about information retrieval systems and to begin informing others of U.S. Patent Office activities in this area. At one session, Pauline and I questioned a speaker about the research that he presented. Pauline's questions concerned the logic of their approach and mine, the statistical aspects. After the session, she came over to talk to me and we began a professional and personal friendship that continues to this day. During the 1960s, Pauline was involved in several important information-retrieval projects including a series of studies for the American Institute of Physics, a dissertation examining the relevance of retrieved documents, and development and evaluation of an online information-retrieval system. I had the opportunity to work with Pauline and her colleagues on four of those projects and will briefly describe her work in the 1960s.
    Date
    22. 9.1997 19:16:05
  19. Song, D.; Bruza, P.D.: Towards context sensitive information inference (2003) 0.04
    Score: coord(2/4) × ("logic" 0.0785569 + "22" 0.008829828) = 0.043693364
    
    Abstract
    Humans can make hasty, but generally robust, judgements about what a text fragment is, or is not, about. Such judgements are termed information inference. This article furnishes an account of information inference from a psychologistic stance. By drawing on theories from nonclassical logic and applied cognition, an information inference mechanism is proposed that makes inferences via computations of information flow through an approximation of a conceptual space. Within a conceptual space, information is represented geometrically. In this article, geometric representations of words are realized as vectors in a high-dimensional semantic space, which is automatically constructed from a text corpus. Two approaches are presented for priming vector representations according to context. The first approach uses a concept combination heuristic to adjust the vector representation of a concept in the light of the representation of another concept. The second approach computes a prototypical concept on the basis of exemplar trace texts and moves it in the dimensional space according to the context. Information inference is evaluated by measuring the effectiveness of query models derived by information flow computations. Results show that information flow contributes significantly to query model effectiveness, particularly with respect to precision. Moreover, retrieval effectiveness compares favorably with two probabilistic query models, and another based on semantic association. More generally, this article can be seen as a contribution towards realizing operational systems that mimic text-based human reasoning.
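    A crude sketch (ours, with invented vectors and a simple linear shift) of the second priming approach: build a prototype from exemplars, then move it toward the context:

      import math

      def cosine(u, v):
          dot = sum(a * b for a, b in zip(u, v))
          return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

      # Invented exemplar vectors for one concept in a small semantic space.
      exemplars = [[0.9, 0.1, 0.0], [0.7, 0.3, 0.1]]
      prototype = [sum(col) / len(exemplars) for col in zip(*exemplars)]

      # Prime the prototype toward the context vector with mixing weight alpha.
      context, alpha = [0.0, 0.2, 0.9], 0.3
      primed = [(1 - alpha) * p + alpha * c for p, c in zip(prototype, context)]

      print(cosine(prototype, context), cosine(primed, context))  # similarity to the context rises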
    Date
    22. 3.2003 19:35:46
  20. Wu, Y.-f.B.; Li, Q.; Bot, R.S.; Chen, X.: Finding nuggets in documents : a machine learning approach (2006) 0.04
    Score: coord(2/4) × ("logic" 0.0785569 + "22" 0.008829828) = 0.043693364
    
    Abstract
    Document keyphrases provide a concise summary of a document's content, offering semantic metadata summarizing a document. They can be used in many applications related to knowledge management and text mining, such as automatic text summarization, development of search engines, document clustering, document classification, thesaurus construction, and browsing interfaces. Because only a small portion of documents have keyphrases assigned by authors, and it is time-consuming and costly to manually assign keyphrases to documents, it is necessary to develop an algorithm to automatically generate keyphrases for documents. This paper describes a Keyphrase Identification Program (KIP), which extracts document keyphrases by using prior positive samples of human identified phrases to assign weights to the candidate keyphrases. The logic of our algorithm is: The more keywords a candidate keyphrase contains and the more significant these keywords are, the more likely this candidate phrase is a keyphrase. KIP's learning function can enrich the glossary database by automatically adding new identified keyphrases to the database. KIP's personalization feature will let the user build a glossary database specifically suitable for the area of his/her interest. The evaluation results show that KIP's performance is better than the systems we compared to and that the learning function is effective.
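    The quoted scoring logic is directly implementable; a sketch with an invented glossary of keyword weights standing in for those learned from human-identified phrases:

      # Invented glossary of keyword weights, as if learned from positive samples.
      glossary = {"machine": 2.0, "learning": 3.0, "text": 1.5}

      def keyphrase_score(candidate):
          # KIP's stated logic: more keywords, and more significant keywords,
          # make a candidate phrase more likely to be a keyphrase.
          return sum(glossary.get(w, 0.0) for w in candidate.lower().split())

      for c in sorted(["machine learning", "text mining", "the results"],
                      key=keyphrase_score, reverse=True):
          print(c, keyphrase_score(c))  # machine learning 5.0 / text mining 1.5 / the results 0.0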
    Date
    22. 7.2006 17:25:48

Types

  • a 1164
  • m 168
  • el 67
  • s 60
  • b 26
  • x 13
  • i 9
  • n 2
  • r 2
