Search (589 results, page 1 of 30)

  • theme_ss:"Wissensrepräsentation"
  1. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.07
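The relevance value printed after each entry (0.07, 0.06, ...) comes from Lucene's ClassicSimilarity, which sums per-term TF-IDF contributions. A minimal sketch of how a single term's contribution is computed, assuming Lucene's classic tf/idf formulas; the concrete statistics (docFreq, norms) are illustrative values of the kind Lucene reports in its score explanations:

```python
import math

def tf(freq):
    """Term-frequency factor: square root of the raw in-field frequency."""
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    """Inverse document frequency: 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + math.log(max_docs / (doc_freq + 1))

# Illustrative statistics for one rare query term.
doc_freq, max_docs = 24, 44218
query_norm, field_norm = 0.035122856, 0.03125
freq = 2.0                            # occurrences of the term in the field

query_weight = idf(doc_freq, max_docs) * query_norm
field_weight = tf(freq) * idf(doc_freq, max_docs) * field_norm
score = query_weight * field_weight   # this term's share of the document score

print(f"idf={idf(doc_freq, max_docs):.6f} score={score:.8f}")
# idf ≈ 8.478011, score ≈ 0.11156881
```

The per-document score is then a coordination-weighted sum of such per-term contributions across the matching query terms.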
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information-overload problem has reached a level of complexity that traditional modelling approaches can no longer manage. Because of their purely syntactic nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning, not merely its representation). As a result, the output of a retrieval process is often of little use for the user's task at hand. Over the last ten years, ontologies have evolved from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, so that the retrieval process can be driven by the meaning of the content. However, the retrieval process is inherently ambiguous: a user, unfamiliar with the underlying repository and/or query syntax, only approximates his information need in a query. This makes it necessary to involve the user more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to interpret the meaning of his query conceptually, while the underlying domain ontology drives the conceptualisation process. In this way the retrieval process evolves from mere query evaluation into a highly interactive cooperation between user and retrieval system, in which the system tries to anticipate the user's information need and to deliver relevant content proactively.
Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation is realized as the so-called Librarian Agent Query Refinement Process. To clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics emerges automatically from the results. Our evaluation studies have shown that the ability to conceptualise a user's information need correctly, and to interpret the retrieval results accordingly, is key to realizing much more meaningful information retrieval systems.
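The ontology-driven refinement loop described in the abstract can be illustrated with a toy sketch: the system looks up a query term in a domain ontology and offers its narrower concepts as candidate refinements. The mini-ontology and function names below are invented for illustration, not the thesis's actual Librarian Agent implementation:

```python
# Toy domain ontology: concept -> narrower concepts (invented data).
ONTOLOGY = {
    "vehicle": ["car", "bicycle", "truck"],
    "car": ["electric car", "sports car"],
}

def suggest_refinements(query_term):
    """Offer narrower ontology concepts as candidate query refinements."""
    return ONTOLOGY.get(query_term.lower(), [])

# The user approximates an information need as "vehicle"; the system
# proposes more specific concepts to close the gap.
print(suggest_refinements("vehicle"))   # ['car', 'bicycle', 'truck']
```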
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  2. Waard, A. de; Fluit, C.; Harmelen, F. van: Drug Ontology Project for Elsevier (DOPE) (2007) 0.06
    
    Abstract
    Innovative research institutes rely on the availability of complete and accurate information about new research and development, and it is the business of information providers such as Elsevier to provide the required information in a cost-effective way. It is very likely that the semantic web will make an important contribution to this effort, since it facilitates access to an unprecedented quantity of data. However, with the unremitting growth of scientific information, integrating access to all this information remains a significant problem, not least because of the heterogeneity of the information sources involved - sources which may use different syntactic standards (syntactic heterogeneity), organize information in very different ways (structural heterogeneity) and even use different terminologies to refer to the same information (semantic heterogeneity). The ability to address these different kinds of heterogeneity is the key to integrated access. Thesauri have already proven to be a core technology for effective information access, as they provide controlled vocabularies for indexing information, and thereby help to overcome some of the problems of free-text search by relating and grouping relevant terms in a specific domain. However, there is currently no open architecture that supports the use of these thesauri for querying other data sources. For example, when we move from the centralized and controlled use of EMTREE within EMBASE.com to a distributed setting, it becomes crucial to improve access to the thesaurus by means of a standardized representation using open data standards that allow for semantic qualifications. In general, mental models and keywords for accessing data diverge between subject areas and communities, and so many different ontologies have been developed. An ideal architecture must therefore support the disclosure of distributed and heterogeneous data sources through different ontologies.
The aim of the DOPE project (Drug Ontology Project for Elsevier) is to investigate the possibility of providing access to multiple information sources in the area of life science through a single interface.
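The way a thesaurus supports free-text search, as described in the abstract, can be sketched as simple query expansion: a controlled vocabulary maps an entered term to its preferred term plus related terms, which are then searched together. The mini-thesaurus below is invented for illustration; EMTREE itself is vastly larger:

```python
# Toy thesaurus: preferred term -> synonyms/related terms (invented data).
THESAURUS = {
    "myocardial infarction": {"heart attack", "MI", "cardiac infarction"},
    "hypertension": {"high blood pressure", "HTN"},
}

def expand_query(term):
    """Return the preferred term plus its related terms, if the term is known;
    otherwise return the term unchanged."""
    t = term.lower()
    for preferred, related in THESAURUS.items():
        if t == preferred or t in {r.lower() for r in related}:
            return {preferred} | related
    return {term}

print(sorted(expand_query("heart attack")))
# ['MI', 'cardiac infarction', 'heart attack', 'myocardial infarction']
```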
  3. Davies, J.; Duke, A.; Stonkus, A.: OntoShare: evolving ontologies in a knowledge sharing system (2004) 0.06
    
    Abstract
    We saw in the introduction how the Semantic Web makes possible a new generation of knowledge management tools. We now turn our attention more specifically to Semantic Web based support for virtual communities of practice. The notion of communities of practice has attracted much attention in the field of knowledge management. Communities of practice are groups within (or sometimes across) organizations who share a common set of information needs or problems. They are typically not a formal organizational unit but an informal network, each sharing in part a common agenda and shared interests or issues. In one example it was found that a lot of knowledge sharing among copier engineers took place through informal exchanges, often around a water cooler. As well as local, geographically based communities, trends towards flexible working and globalisation have led to interest in supporting dispersed communities using Internet technology. The challenge for organizations is to support such communities and make them effective. Provided with an ontology meeting the needs of a particular community of practice, knowledge management tools can arrange knowledge assets into the predefined conceptual classes of the ontology, allowing more natural and intuitive access to knowledge. Knowledge management tools must give users the ability to organize information into a controllable asset. Building an intranet-based store of information is not sufficient for knowledge management; the relationships within the stored information are vital. These relationships cover such diverse issues as relative importance, context, sequence, significance, causality and association. The potential for knowledge management tools is vast; not only can they make better use of the raw information already available, but they can sift, abstract and help to share new information, and present it to users in new and compelling ways.
    In this chapter, we describe the OntoShare system which facilitates and encourages the sharing of information between communities of practice within (or perhaps across) organizations and which encourages people - who may not previously have known of each other's existence in a large organization - to make contact where there are mutual concerns or interests. As users contribute information to the community, a knowledge resource annotated with meta-data is created. Ontologies defined using the resource description framework (RDF) and RDF Schema (RDFS) are used in this process. RDF is a W3C recommendation for the formulation of meta-data for WWW resources. RDF(S) extends this standard with the means to specify domain vocabulary and object structures - that is, concepts and the relationships that hold between them. In the next section, we describe in detail the way in which OntoShare can be used to share and retrieve knowledge and how that knowledge is represented in an RDF-based ontology. We then proceed to discuss in Section 10.3 how the ontologies in OntoShare evolve over time based on user interaction with the system and motivate our approach to user-based creation of RDF-annotated information resources. The way in which OntoShare can help to locate expertise within an organization is then described, followed by a discussion of the sociotechnical issues of deploying such a tool. Finally, a planned evaluation exercise and avenues for further research are outlined.
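The annotation step OntoShare performs when a user contributes a resource can be sketched as plain RDF triples in N-Triples syntax; the namespace and property names below are hypothetical stand-ins, not OntoShare's actual vocabulary:

```python
# Minimal sketch: annotate a shared resource with RDF triples emitted in
# N-Triples syntax. The ontology URI and class name are invented placeholders.
ONT = "http://example.org/ontoshare#"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

def annotate(resource_uri, contributor, concept):
    """Return N-Triples linking a shared resource to its contributor and
    to an ontology concept (its RDF type)."""
    return "\n".join([
        f"<{resource_uri}> <{ONT}contributedBy> \"{contributor}\" .",
        f"<{resource_uri}> <{RDF_TYPE}> <{ONT}{concept}> .",
    ])

triples = annotate("http://example.org/doc/42", "alice", "KnowledgeAsset")
print(triples)
```

Collecting such triples across all contributions yields the meta-data-annotated knowledge resource the chapter describes.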
    Source
    Towards the semantic Web: ontology-driven knowledge management. Eds.: J. Davies et al.
  4. Giunchiglia, F.; Dutta, B.; Maltese, V.: From knowledge organization to knowledge representation (2014) 0.06
    
    Abstract
    So far, within the library and information science (LIS) community, knowledge organization (KO) has developed its own very successful solutions to document search, allowing for the classification, indexing and search of millions of books. However, current KO solutions are limited in expressivity, as they only support queries by document properties, e.g., by title, author and subject. In parallel, within the artificial intelligence and semantic web communities, knowledge representation (KR) has developed very powerful and expressive techniques, which via the use of ontologies support queries by any entity property (e.g., the properties of the entities described in a document). However, KR has not yet scaled to the level of KO, mainly because of the lack of a precise and scalable entity specification methodology. In this paper we present DERA, a new methodology inspired by the faceted approach, as introduced in KO, that retains all the advantages of KR and compensates for the limitations of KO. DERA guarantees at the same time quality, extensibility, scalability and effectiveness in search.
    Content
    Papers from the ISKO-UK Biennial Conference, "Knowledge Organization: Pushing the Boundaries," United Kingdom, 8-9 July, 2013, London.
  5. Kiren, T.; Shoaib, M.: ¬A novel ontology matching approach using key concepts (2016) 0.06
    
    Abstract
    Purpose: Ontologies are used to formally describe the concepts within a domain in a machine-understandable way. Matching heterogeneous ontologies is often essential for applications such as semantic annotation, query answering or ontology integration. Some ontologies include a large number of entities, which makes the matching process very complex in terms of search space and execution time. This paper presents a technique for finding the degree of similarity between ontologies that trims down the search space by eliminating ontology concepts with little likelihood of being matched.
    Design/methodology/approach: Algorithms are given for finding key concepts, concept matching and relationship matching. WordNet is used to resolve synonyms during the matching process. The technique is evaluated against the reference alignments of the Ontology Alignment Evaluation Initiative benchmark in terms of degree of similarity, Pearson's correlation coefficient and the IR measures precision, recall and F-measure.
    Findings: The positive correlation between the computed and reference degrees of similarity, together with the computed precision, recall and F-measure values, shows that if only the key concepts of ontologies are compared, a time- and search-space-efficient ontology matching system can be developed.
    Originality/value: Based on this novel approach, using key concepts for ontology matching gives comparable results in reduced time and space.
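The evaluation measures named in the abstract can be sketched for a toy alignment. This is the generic precision/recall/F-measure computation over matched concept pairs against a reference alignment, not the authors' full key-concept matching algorithm; the concept names are invented:

```python
def prf(found, reference):
    """Precision, recall and F-measure of a set of matched concept pairs
    against a reference alignment (both sets of (concept, concept) pairs)."""
    correct = len(found & reference)
    precision = correct / len(found) if found else 0.0
    recall = correct / len(reference) if reference else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure

# Toy alignments between two ontologies (invented concept names).
reference = {("Car", "Automobile"), ("Person", "Human"), ("City", "Town")}
found = {("Car", "Automobile"), ("Person", "Human"), ("Road", "Street")}

p, r, f = prf(found, reference)
print(p, r, round(f, 3))   # 2 of 3 matches correct: precision = recall = 2/3
```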
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 68(2016) no.1, S.99-111
  6. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.06
    
    Abstract
    On a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values that form a group of child concepts. We call these attributes facets: classification, for example, has facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, faceted relations are direct parent-to-child links, whereas the hypernym relation is a multi-hop, i.e., ancestor-to-descendant, link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations in a data science corpus, and we propose a hierarchy growth algorithm that infers the parent-child links from these three types of relationships. It resolves conflicts by maintaining the acyclic structure of the hierarchy.
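    The conflict-resolution idea at the end of the abstract, accepting a candidate parent-child link only if the hierarchy stays acyclic, can be sketched as follows. This is a minimal illustration assuming nothing about the paper's actual data structures; the candidate links in the demo are invented:

```python
def creates_cycle(parent_of, child, parent):
    """Would adding `parent` as a parent of `child` close a loop?
    `parent_of` maps each node to the set of its current parents."""
    # Walk upward from `parent`; if we ever reach `child`,
    # then `child` is already an ancestor of `parent`.
    stack, seen = [parent], set()
    while stack:
        node = stack.pop()
        if node == child:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(parent_of.get(node, ()))
    return False

def grow_hierarchy(candidate_links):
    """Insert candidate (parent, child) links in order, skipping any
    link that would break the acyclic structure of the hierarchy."""
    parent_of, accepted = {}, []
    for parent, child in candidate_links:
        if parent == child or creates_cycle(parent_of, child, parent):
            continue  # conflict: keep the hierarchy acyclic
        parent_of.setdefault(child, set()).add(parent)
        accepted.append((parent, child))
    return accepted

links = [("classification", "svm"), ("model", "svm"), ("svm", "classification")]
print(grow_hierarchy(links))
# → [('classification', 'svm'), ('model', 'svm')]  (the reverse link is rejected)
```

A real system would also rank candidates by extraction confidence before insertion, so that the stronger of two conflicting links wins rather than simply the earlier one.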
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
    Imprint
    Association for Computational Linguistics : Stroudsburg, PA
    Source
    Graph-Based Methods for Natural Language Processing - proceedings of the Thirteenth Workshop (TextGraphs-13): November 4, 2019, Hong Kong : EMNLP-IJCNLP 2019. Ed.: Dmitry Ustalov
  7. Giunchiglia, F.; Villafiorita, A.; Walsh, T.: Theories of abstraction (1997) 0.06
    
    Abstract
    Describes the types of representations used in different theories of abstraction. Shows how the type of mapping between these representations has been increasingly generalised. Discusses desirable properties preserved by such mappings and identifies how these properties are influenced by the mappings and the representations defined. Surveys progress made in understanding the complexity reduction associated with abstraction, focusing on formal models of how abstraction reduces the search space. Presents some of the systems that implement abstraction and shows how efforts in this area have focused on the mechanisation of languages for the declarative representation of abstraction.
    Date
    1.10.2018 14:13:22
  8. Baofu, P.: ¬The future of information architecture : conceiving a better way to understand taxonomy, network, and intelligence (2008) 0.06
    
    Abstract
    The Future of Information Architecture examines why information has been processed, stored and applied in the way it has since time immemorial. Contrary to the conventional wisdom held by many scholars in human history, the recurrent debate on the explanation of the most basic categories of information (e.g. space, time, causation, quality, quantity) has been misconstrued, to the effect that there exist some deeper categories and principles behind these categories of information, with enormous implications for our understanding of reality in general. To show this, the book is organised into four main parts: Part I begins with the vital question concerning the role of information within the context of the larger theoretical debate in the literature. Part II provides a critical examination of the nature of data taxonomy from the main perspectives of culture, society, nature and the mind. Part III constructively investigates the world of information networks from the same perspectives of culture, society, nature and the mind. Part IV proposes six main theses in the author's synthetic theory of information architecture, namely, (a) the first thesis on the simpleness-complicatedness principle, (b) the second thesis on the exactness-vagueness principle, (c) the third thesis on the slowness-quickness principle, (d) the fourth thesis on the order-chaos principle, (e) the fifth thesis on the symmetry-asymmetry principle, and (f) the sixth thesis on the post-human stage.
    LCSH
    Information resources
    Information organization
    Information storage and retrieval systems
    RSWK
    Suchmaschine / Information Retrieval
    Subject
    Information resources
    Information organization
    Information storage and retrieval systems
    Suchmaschine / Information Retrieval
  9. Stuckenschmidt, H.; Harmelen, F. van; Waard, A. de; Scerri, T.; Bhogal, R.; Buel, J. van; Crowlesmith, I.; Fluit, C.; Kampman, A.; Broekstra, J.; Mulligen, E. van: Exploring large document repositories with RDF technology : the DOPE project (2004) 0.06
    
    Abstract
    This thesaurus-based search system uses automatic indexing, RDF-based querying, and concept-based visualization of results to support exploration of large online document repositories. Innovative research institutes rely on the availability of complete and accurate information about new research and development. Information providers such as Elsevier make it their business to provide the required information in a cost-effective way. The Semantic Web will likely contribute significantly to this effort because it facilitates access to an unprecedented quantity of data. The DOPE project (Drug Ontology Project for Elsevier) explores ways to provide access to multiple life-science information sources through a single interface. With the unremitting growth of scientific information, integrating access to all this information remains an important problem, primarily because the information sources involved are so heterogeneous. Sources might use different syntactic standards (syntactic heterogeneity), organize information in different ways (structural heterogeneity), and even use different terminologies to refer to the same information (semantic heterogeneity). Integrated access hinges on the ability to address these different kinds of heterogeneity. Also, mental models and keywords for accessing data generally diverge between subject areas and communities; hence, many different ontologies have emerged. An ideal architecture must therefore support the disclosure of distributed and heterogeneous data sources through different ontologies. To serve this need, we've developed a thesaurus-based search system that uses automatic indexing, RDF-based querying, and concept-based visualization. We describe here the conversion of an existing proprietary thesaurus to an open standard format, a generic architecture for thesaurus-based information access, an innovative user interface, and results of initial user studies with the resulting DOPE system.
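    The semantic-heterogeneity point, that different terminologies refer to the same concept, is the core of thesaurus-based access. A minimal, self-contained sketch of the idea, using an invented toy thesaurus rather than the proprietary Elsevier one and plain Python in place of the project's RDF machinery:

```python
# Toy thesaurus: preferred term -> set of synonym labels (illustrative only)
THESAURUS = {
    "aspirin": {"acetylsalicylic acid", "asa"},
    "heart attack": {"myocardial infarction", "mi"},
}

def expand_query(term):
    """Map a user's term to the full label set of its thesaurus concept."""
    term = term.lower()
    for preferred, synonyms in THESAURUS.items():
        if term == preferred or term in synonyms:
            return {preferred} | synonyms
    return {term}  # unknown term: fall back to literal matching

def search(docs, term):
    """Concept-based search: a document matches if it contains any label
    of the concept, regardless of which synonym the user typed."""
    labels = expand_query(term)
    return [doc_id for doc_id, text in docs.items()
            if any(label in text.lower() for label in labels)]

docs = {1: "Acetylsalicylic acid reduces fever.", 2: "An ibuprofen study."}
print(search(docs, "aspirin"))  # → [1], despite the synonym in the text
```

In the actual DOPE architecture this role is played by a thesaurus converted to an open RDF format and queried through RDF tooling, so the concept expansion happens at the data layer rather than in application code.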
    Content
    Cf.: Waard, A. de, C. Fluit and F. van Harmelen: Drug Ontology Project for Elsevier (DOPE). In: http://www.w3.org/2001/sw/sweo/public/UseCases/Elsevier/Elsevier_Aduna_VU.pdf.
  10. Beppler, F.D.; Fonseca, F.T.; Pacheco, R.C.S.: Hermeneus: an architecture for an ontology-enabled information retrieval (2008) 0.06
    
    Abstract
    Ontologies improve IR systems' retrieval and presentation of information, making the task of finding information more effective, efficient, and interactive. In this paper we argue that ontologies also greatly improve the engineering of such systems. We created a framework that uses an ontology to drive the process of engineering an IR system, and developed a prototype that shows how a domain specialist without knowledge of the IR field can build an IR system with interactive components. The resulting system supports users not only in satisfying their information needs but also in extending their state of knowledge. In this way, our approach to ontology-enabled information retrieval addresses both the engineering aspect described here and the usability aspect described elsewhere.
    Date
    28.11.2016 12:43:22
    Source
    http://www.personal.psu.edu/faculty/f/u/fuf1/hermeneus/Hermeneus_architecture.pdf
  11. Rosemblat, G.; Resnick, M.P.; Auston, I.; Shin, D.; Sneiderman, C.; Fizsman, M.; Rindflesch, T.C.: Extending SemRep to the public health domain (2013) 0.06
    
    Abstract
    We describe the use of a domain-independent method to extend a natural language processing (NLP) application, SemRep (Rindflesch, Fiszman, & Libbus, 2005), based on the knowledge sources afforded by the Unified Medical Language System (UMLS®; Humphreys, Lindberg, Schoolman, & Barnett, 1998) to support the area of health promotion within the public health domain. Public health professionals require good information about successful health promotion policies and programs that might be considered for application within their own communities. Our effort seeks to improve access to relevant information for the public health profession, to help those in the field remain an information-savvy workforce. Natural language processing and semantic techniques hold promise to help public health professionals navigate the growing ocean of information by organizing and structuring this knowledge into a focused public health framework paired with a user-friendly visualization application as a way to summarize results of PubMed® searches in this field of knowledge.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.10, S.1963-1974
  12. Xiong, C.: Knowledge based text representations for information retrieval (2016)
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word based and entity based representations together with their uncertainties considered. At last, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts, and to rank documents using their structure representations. This dissertation overcomes the limitation of word based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards the new generation of intelligent, semantic, and structured information retrieval.
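    The bag-of-entities idea sketched in the abstract can be made concrete with a toy example. The entity annotations, documents, and overlap scoring below are hypothetical stand-ins for the dissertation's actual annotators and ranking models; they only illustrate ranking in the entity space rather than the word space.

    ```python
    # Minimal sketch of a bag-of-entities representation: documents are
    # multisets of entity annotations, and ranking is frequency-weighted
    # overlap with the query's entities. All data here is invented.
    from collections import Counter

    def bag_of_entities(annotations):
        """Represent a document by the multiset of its entity annotations."""
        return Counter(annotations)

    def entity_overlap_score(query_entities, doc_entities):
        """Score a document by summed frequencies of the query's entities."""
        return sum(doc_entities[e] for e in query_entities)

    docs = {
        "d1": bag_of_entities(["Information_retrieval", "Information_retrieval",
                               "Knowledge_base", "Carnegie_Mellon_University"]),
        "d2": bag_of_entities(["Natural_language_processing", "Word_embedding"]),
    }
    query = ["Information_retrieval", "Knowledge_base"]

    ranking = sorted(docs, key=lambda d: entity_overlap_score(query, docs[d]),
                     reverse=True)
    print(ranking)  # ['d1', 'd2'] -- d1 scores 2 + 1 = 3, d2 scores 0
    ```

    A real system would obtain the annotations from an entity linker and combine this evidence with word-based features, as the thesis proposes.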
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
    Imprint
    Pittsburgh, PA : Carnegie Mellon University, School of Computer Science, Language Technologies Institute
  13. Jacobs, I.: From chaos, order: W3C standard helps organize knowledge : SKOS Connects Diverse Knowledge Organization Systems to Linked Data (2009)
    Abstract
    18 August 2009 -- Today W3C announces a new standard that builds a bridge between the world of knowledge organization systems - including thesauri, classifications, subject headings, taxonomies, and folksonomies - and the linked data community, bringing benefits to both. Libraries, museums, newspapers, government portals, enterprises, social networking applications, and other communities that manage large collections of books, historical artifacts, news reports, business glossaries, blog entries, and other items can now use Simple Knowledge Organization System (SKOS) to leverage the power of linked data. As different communities with expertise and established vocabularies use SKOS to integrate them into the Semantic Web, they increase the value of the information for everyone.
    Content
    SKOS Adapts to the Diversity of Knowledge Organization Systems A useful starting point for understanding the role of SKOS is the set of subject headings published by the US Library of Congress (LOC) for categorizing books, videos, and other library resources. These headings can be used to broaden or narrow queries for discovering resources. For instance, one can narrow a query about books on "Chinese literature" to "Chinese drama," or further still to "Chinese children's plays." Library of Congress subject headings have evolved within a community of practice over a period of decades. By now publishing these subject headings in SKOS, the Library of Congress has made them available to the linked data community, which benefits from a time-tested set of concepts to re-use in their own data. This re-use adds value ("the network effect") to the collection. When people all over the Web re-use the same LOC concept for "Chinese drama," or a concept from some other vocabulary linked to it, this creates many new routes to the discovery of information, and increases the chances that relevant items will be found. As an example of mapping one vocabulary to another, a combined effort from the STITCH, TELplus and MACS Projects provides links between LOC concepts and RAMEAU, a collection of French subject headings used by the Bibliothèque Nationale de France and other institutions. SKOS can be used for subject headings but also many other approaches to organizing knowledge. Because different communities are comfortable with different organization schemes, SKOS is designed to port diverse knowledge organization systems to the Web. "Active participation from the library and information science community in the development of SKOS over the past seven years has been key to ensuring that SKOS meets a variety of needs," said Thomas Baker, co-chair of the Semantic Web Deployment Working Group, which published SKOS. 
"One goal in creating SKOS was to provide new uses for well-established knowledge organization systems by providing a bridge to the linked data cloud." SKOS is part of the Semantic Web technology stack. Like the Web Ontology Language (OWL), SKOS can be used to define vocabularies. But the two technologies were designed to meet different needs. SKOS is a simple language with just a few features, tuned for sharing and linking knowledge organization systems such as thesauri and classification schemes. OWL offers a general and powerful framework for knowledge representation, where additional "rigor" can afford additional benefits (for instance, business rule processing). To get started with SKOS, see the SKOS Primer.
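    The query-narrowing behaviour described above (from "Chinese literature" to "Chinese drama" to "Chinese children's plays") can be sketched with plain SKOS triples. This is an illustrative toy, not real Library of Congress data: the `ex:` identifiers are invented, and only the skos:broader/skos:narrower pattern is taken from the standard.

    ```python
    # SKOS concepts as (subject, predicate, object) triples; skos:narrower
    # links let a search interface offer ways to refine a query.
    SKOS = "http://www.w3.org/2004/02/skos/core#"

    triples = set()

    def add_concept(uri, label, broader=None):
        triples.add((uri, SKOS + "prefLabel", label))
        if broader:
            # SKOS declares broader/narrower as inverses; record both.
            triples.add((uri, SKOS + "broader", broader))
            triples.add((broader, SKOS + "narrower", uri))

    add_concept("ex:ChineseLiterature", "Chinese literature")
    add_concept("ex:ChineseDrama", "Chinese drama",
                broader="ex:ChineseLiterature")
    add_concept("ex:ChineseChildrensPlays", "Chinese children's plays",
                broader="ex:ChineseDrama")

    def narrower(uri):
        """Concepts one step narrower than `uri` -- used to refine a query."""
        return sorted(o for s, p, o in triples
                      if s == uri and p == SKOS + "narrower")

    print(narrower("ex:ChineseLiterature"))  # ['ex:ChineseDrama']
    ```

    Publishing such triples as RDF is what lets the linked data community re-use a vocabulary: anyone can follow the same concept URIs and discover the broader/narrower structure.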
  14. Aparecida Moura, M.: Emerging discursive formations, folksonomy and social semantic information spaces (SSIS) : the contributions of the theory of integrative levels in the studies carried out by the Classification Research Group (CRG) (2014)
    Abstract
    This paper focuses on the discursive formations emerging from the Social Semantic Information Spaces (SSIS) in light of the concept of emergence in the theory of integrative levels. The study aims to identify the opportunities and challenges of incorporating epistemological considerations in the act of acquiring knowledge into the consolidation of knowledge organization and mediation processes and devices in the emergence of phenomena. The goal was to analyze the effects of that concept on the actions of a sample of researchers registered in an emerging research domain in SSIS in order to understand this type of indexing done by the users and communities as a classification of integrating levels. The methodology was established by triangulation through social network analysis, consensus analysis and archaeology of knowledge. It was possible to conclude that there is a collective effort to settle a semantic interoperability model for the labeling of contents based on best practices regarding the description of the objects shared in SSIS.
    Footnote
    Papers from I Congress of ISKO Spain and Portugal / XI Congress ISKO Spain, 7-9 November 2013, University of Porto.
  15. Baião Salgado Silva, G.; Lima, G.Â. Borém de Oliveira: Using topic maps in establishing compatibility of semantically structured hypertext contents (2012)
    Abstract
    Considering the characteristics of hypertext systems and problems such as cognitive overload and the disorientation of users, this project studies subject hypertext documents that have undergone conceptual structuring using facets for content representation and improvement of information retrieval during navigation. The main objective was to assess the possibility of the application of topic map technology for automating the compatibilization process of these structures. For this purpose, two dissertations from the UFMG Information Science Post-Graduation Program were adopted as samples. Both dissertations had been duly analyzed and structured on the MHTX (Hypertextual Map) prototype database. The faceted structures of both dissertations, which had been represented in conceptual maps, were then converted into topic maps. It was then possible to use the merge property of the topic maps to promote the semantic interrelationship between the maps and, consequently, between the hypertextual information resources proper. The merge results were then analyzed in the light of theories dealing with the compatibilization of languages developed within the realm of information technology and librarianship from the 1960s on. The main goals accomplished were: (a) the detailed conceptualization of the merge process of the topic maps, considering the possible compatibilization levels and the applicability of this technology in the integration of faceted structures; and (b) the production of a detailed sequence of steps that may be used in the implementation of topic maps based on faceted structures.
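    The merge property central to the study can be sketched in simplified form: topics that share a subject identifier are fused, pooling their names and occurrences. The data model below is a drastic simplification of ISO 13250 topic maps, and the sample topics are invented rather than taken from the MHTX prototype.

    ```python
    # Toy topic-map merge: topics keyed by subject identifier; topics with
    # the same identifier in two maps become one topic with the union of
    # names and occurrences, interrelating the two hypertexts.
    def merge_topic_maps(tm_a, tm_b):
        """Merge two maps of {subject_id: {"names": set, "occurrences": set}}."""
        merged = {}
        for tm in (tm_a, tm_b):
            for sid, topic in tm.items():
                entry = merged.setdefault(sid, {"names": set(),
                                                "occurrences": set()})
                entry["names"] |= topic["names"]
                entry["occurrences"] |= topic["occurrences"]
        return merged

    tm1 = {"ex:indexing": {"names": {"Indexing"},
                           "occurrences": {"dissertation-1#s3"}}}
    tm2 = {"ex:indexing": {"names": {"Indexação"},
                           "occurrences": {"dissertation-2#s1"}}}

    merged = merge_topic_maps(tm1, tm2)
    print(len(merged))  # 1 -- the shared subject fuses both topics
    ```

    After the merge, navigating to the shared subject reaches occurrences from both source hypertexts, which is the compatibilization effect the paper evaluates.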
    Date
    22. 2.2013 11:39:23
  16. Miles, A.; Matthews, B.; Beckett, D.; Brickley, D.; Wilson, M.; Rogers, N.: SKOS: A language to describe simple knowledge structures for the web (2005)
    Abstract
    The paper presents an introduction to W3C's Simple Knowledge Organisation System (SKOS) , an RDF Schema designed to represent and share controlled vocabularies, such as classifications, glossaries, and thesauri, more simply than ontology languages.
    Content
    "Textual content-based search engines for the web have a number of limitations. Firstly, many web resources have little or no textual content (images, audio or video streams etc.) Secondly, precision is low where natural language terms have overloaded meaning (e.g. 'bank', 'watch', 'chip' etc.) Thirdly, recall is incomplete where the search does not take account of synonyms or quasi-synonyms. Fourthly, there is no basis for assisting a user in modifying (expanding, refining, translating) a search based on the meaning of the original search. Fifthly, there is no basis for searching across natural languages, or framing search queries in terms of symbolic languages. The Semantic Web is a framework for creating, managing, publishing and searching semantically rich metadata for web resources. Annotating web resources with precise and meaningful statements about conceptual aspects of their content provides a basis for overcoming all of the limitations of textual content-based search engines listed above. Creating this type of metadata requires that metadata generators are able to refer to shared repositories of meaning: 'vocabularies' of concepts that are common to a community, and describe the domain of interest for that community.
    This type of effort is common in the digital library community, where a group of experts will interact with a user community to create a thesaurus for a specific domain (e.g. the Art & Architecture Thesaurus AAT AAT) or an overarching classification scheme (e.g. the Dewey Decimal Classification). A similar type of activity is being undertaken more recently in a less centralised manner by web communities, producing for example the DMOZ web directory DMOZ, or the Topic Exchange for weblog topics Topic Exchange. The web, including the semantic web, provides a medium within which communities can interact and collaboratively build and use vocabularies of concepts. A simple language is required that allows these communities to express the structure and content of their vocabularies in a machine-understandable way, enabling exchange and reuse. The Resource Description Framework (RDF) is an ideal language for making statements about web resources and publishing metadata. However, RDF provides only the low level semantics required to form metadata statements. RDF vocabularies must be built on top of RDF to support the expression of more specific types of information within metadata. Ontology languages such as OWL OWL add a layer of expressive power to RDF, and provide powerful tools for defining complex conceptual structures, which can be used to generate rich metadata. However, the class-oriented, logically precise modelling required to construct useful web ontologies is demanding in terms of expertise, effort, and therefore cost. In many cases this type of modelling may be superfluous or unsuited to requirements. Therefore there is a need for a language for expressing vocabularies of concepts for use in semantically rich metadata, that is powerful enough to support semantically enhanced search, but simple enough to be undemanding in terms of the cost and expertise required to use it."
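    One of the recall limitations listed above (searches that ignore synonyms) is exactly what a shared vocabulary with preferred and alternative labels addresses. The sketch below is a hedged illustration, not part of the SKOS specification: the vocabulary, concept URIs, and labels are invented, and only the prefLabel/altLabel distinction comes from SKOS.

    ```python
    # Query expansion via a concept vocabulary: if a query term matches any
    # label of a concept, add that concept's other labels to the query.
    vocabulary = {
        "ex:Watch": {"prefLabel": "watch",
                     "altLabel": {"timepiece", "wristwatch"}},
        "ex:Bank":  {"prefLabel": "bank",
                     "altLabel": {"financial institution"}},
    }

    def expand_query(term):
        """Return the term plus all labels of any concept it matches."""
        expanded = {term}
        for concept in vocabulary.values():
            labels = {concept["prefLabel"]} | concept["altLabel"]
            if term in labels:
                expanded |= labels
        return sorted(expanded)

    print(expand_query("watch"))  # ['timepiece', 'watch', 'wristwatch']
    ```

    A term outside the vocabulary simply passes through unexpanded, so the scheme degrades gracefully to plain keyword search.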
    Footnote
    XTech 2005: XML, the Web and beyond.
  17. Eito-Brun, R.: Ontologies and the exchange of technical information : building a knowledge repository based on ECSS standards (2014)
    Abstract
    The development of complex projects in the aerospace industry is based on the collaboration of geographically distributed teams and companies. In this context, the need to share different types of data and information is a key factor in assuring the successful execution of projects. In the case of European projects, the ECSS standards provide a normative framework that specifies, among other requirements, the different document types, information items, and artifacts that need to be generated. The specifications of these information items are usually incorporated as annexes to the different ECSS standards, and they state the intended purpose, scope, and structure of the documents and information items. In these standards, documents or deliverables should not be considered independent items, but rather the result of packaging different information artifacts for delivery between the involved parties. Successful information integration and knowledge exchange cannot be based exclusively on the conceptual definition of information types; it also requires the definition of methods and techniques for serializing and exchanging these documents and artifacts. This area is not covered by the ECSS standards, and the definition of such data schemas would improve collaboration processes among companies. This paper describes the development of an OWL-based ontology to manage the different artifacts and information items requested in the European Space Agency (ESA) ECSS standards for software development. The ECSS set of standards is the main reference for aerospace projects in Europe; in addition to engineering and managerial requirements, it provides a set of DRDs (Document Requirements Documents) with the structure of the different documents and records necessary to manage projects and describe intermediate information products and final deliverables.
Information integration is a must-have in aerospace projects, where different players need to collaborate and share data about requirements, design elements, problems, etc. throughout the product life cycle. The proposed ontology provides the basis for building advanced information systems in which information coming from different companies and institutions can be integrated into a coherent set of related data. It also provides a conceptual framework for developing interfaces and gateways between the different tools and information systems used by the various players in aerospace projects.
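The central modelling idea of this abstract, that a deliverable is a package of information artifacts rather than an independent item, can be sketched as a minimal data model. This is a stdlib Python sketch under stated assumptions; all class names and the DRD identifier below are hypothetical and do not reproduce the paper's actual OWL ontology:

```python
# Hedged sketch (hypothetical names): an ECSS-style deliverable document
# packages independent information artifacts for delivery between parties.
from dataclasses import dataclass, field

@dataclass
class InformationArtifact:
    identifier: str
    kind: str  # e.g. "requirement", "design element", "record"

@dataclass
class DeliverableDocument:
    drd_id: str  # the DRD (Document Requirements Document) it satisfies
    artifacts: list = field(default_factory=list)

    def package(self, artifact: InformationArtifact) -> None:
        # Packaging, not authoring: the artifact exists independently
        # of any one document it is delivered in.
        self.artifacts.append(artifact)

# A software development plan packaging two artifacts for delivery.
plan = DeliverableDocument(drd_id="DRD-SDP-EXAMPLE")
plan.package(InformationArtifact("REQ-001", "requirement"))
plan.package(InformationArtifact("DES-010", "design element"))
print(len(plan.artifacts))
```

In the paper's OWL setting the same separation would be expressed as classes and object properties rather than Python objects; the point of the sketch is only the document-as-package structure.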
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  18. Fonseca, F.: ¬The double role of ontologies in information science research (2007) 0.05
    
    Abstract
    In philosophy, Ontology is the basic description of things in the world. In information science, an ontology refers to an engineering artifact, constituted by a specific vocabulary used to describe a certain reality. Ontologies have been proposed for validating both conceptual models and conceptual schemas; however, these roles are quite dissimilar. In this article, we show that ontologies can be better understood if we classify the different uses of the term as it appears in the literature. First, we explain Ontology (upper case O) as used in Philosophy. Then, we propose a differentiation between ontologies of information systems and ontologies for information systems. All three concepts have an important role in information science. We clarify the different meanings and uses of Ontology and ontologies through a comparison of research by Wand and Weber and by Guarino in ontology-driven information systems. The contributions of this article are twofold: (a) It provides a better understanding of what ontologies are, and (b) it explains the double role of ontologies in information science research.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.6, S.786-793
  19. Scheir, P.; Pammer, V.; Lindstaedt, S.N.: Information retrieval on the Semantic Web : does it exist? (2007) 0.05
    
    Abstract
    Many contemporary search approaches are associated with the area of the Semantic Web. But which of them qualify as information retrieval for the Semantic Web? Do such approaches exist at all? To answer these questions, we look at the nature of the Semantic Web and the Semantic Desktop, and at definitions of information and data retrieval. We then survey current approaches that are referred to by their authors as information retrieval for the Semantic Web, or that use Semantic Web technology for search.
    Content
    Contains an overview of models, systems, and projects
    Source
    Lernen - Wissen - Adaption : workshop proceedings / LWA 2007, Halle, September 2007. Martin Luther University Halle-Wittenberg, Institute for Informatics, Databases and Information Systems. Ed.: Alexander Hinneburg
  20. Kruk, S.R.; Kruk, E.; Stankiewicz, K.: Evaluation of semantic and social technologies for digital libraries (2009) 0.05
    
    Abstract
    Libraries are the tools we use to learn and to answer our questions. The quality of our work depends, among other things, on the quality of the tools we use. Recent research in digital libraries focuses, on the one hand, on improving the infrastructure of digital library management systems (DLMS) and, on the other, on improving the metadata models used to annotate the collections of objects maintained by a DLMS. The latter include, among others, semantic web and social networking technologies, which are now being introduced into the digital libraries domain. The expected outcome is that the overall quality of information discovery in digital libraries can be improved by employing social and semantic technologies. In this chapter we present the results of an evaluation of social and semantic end-user information discovery services for digital libraries.
    Date
    1. 8.2010 12:35:22

Years

Languages

  • e 451
  • d 123
  • pt 4
  • f 1
  • sp 1

Types

  • a 414
  • el 165
  • m 38
  • x 35
  • n 15
  • s 15
  • r 9
  • p 7
  • A 1
  • EL 1

Subjects

Classifications