Search (74 results, page 1 of 4)

  • Filter: theme_ss:"Wissensrepräsentation"
  1. Renear, A.H.; Wickett, K.M.; Urban, R.J.; Dubin, D.; Shreeves, S.L.: Collection/item metadata relationships (2008) 0.03
    0.030721527 = product of:
      0.092164576 = sum of:
        0.092164576 = sum of:
          0.054590914 = weight(_text_:project in 2623) [ClassicSimilarity], result of:
            0.054590914 = score(doc=2623,freq=2.0), product of:
              0.19509704 = queryWeight, product of:
                4.220981 = idf(docFreq=1764, maxDocs=44218)
                0.04622078 = queryNorm
              0.27981415 = fieldWeight in 2623, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.220981 = idf(docFreq=1764, maxDocs=44218)
                0.046875 = fieldNorm(doc=2623)
          0.03757366 = weight(_text_:22 in 2623) [ClassicSimilarity], result of:
            0.03757366 = score(doc=2623,freq=2.0), product of:
              0.16185729 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04622078 = queryNorm
              0.23214069 = fieldWeight in 2623, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2623)
      0.33333334 = coord(1/3)
    
    Abstract
    Contemporary retrieval systems, which search across collections, usually ignore collection-level metadata. Alternative approaches, exploiting collection-level information, will require an understanding of the various kinds of relationships that can obtain between collection-level and item-level metadata. This paper outlines the problem and describes a project that is developing a logic-based framework for classifying collection/item metadata relationships. This framework will support (i) metadata specification developers defining metadata elements, (ii) metadata creators describing objects, and (iii) system designers implementing systems that take advantage of collection-level metadata. We present three examples of collection/item metadata relationship categories (attribute/value-propagation, value-propagation, and value-constraint) and show that even in these simple cases a precise formulation requires modal notions in addition to first-order logic. These formulations are related to recent work in information retrieval and ontology evaluation.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
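    The indented "product of / sum of / weight(...)" blocks under each hit are Lucene explain output for its classic TF-IDF similarity: for every matching term, fieldWeight = sqrt(termFreq) × idf × fieldNorm and queryWeight = idf × queryNorm are multiplied, the per-term weights are summed, and the sum is scaled by the coordination factor (matching clauses / total clauses). The sketch below reproduces the first result's score from the factors printed above; it only illustrates the arithmetic, is not code from the search system, and the function name is invented.

```python
import math

# Recompute result 1's score (doc 2623) from the factors shown in its explain tree.
# ClassicSimilarity: weight(term) = [idf * queryNorm] * [sqrt(tf) * idf * fieldNorm]
def term_weight(freq, idf, query_norm, field_norm):
    field_weight = math.sqrt(freq) * idf * field_norm    # e.g. 0.27981415 for "project"
    query_weight = idf * query_norm                       # e.g. 0.19509704 for "project"
    return query_weight * field_weight

QUERY_NORM = 0.04622078                                   # shared by all terms of the query

w_project = term_weight(2.0, 4.220981, QUERY_NORM, 0.046875)   # ~0.054590914
w_22      = term_weight(2.0, 3.5018296, QUERY_NORM, 0.046875)  # ~0.03757366

score = (w_project + w_22) * (1 / 3)   # coord(1/3): one of three top-level clauses matched
print(f"{w_project:.9f}  {w_22:.9f}  {score:.9f}")        # ~0.030721527, as listed above
```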
  2. Baião Salgado Silva, G.; Lima, G.Â. Borém de Oliveira: Using topic maps in establishing compatibility of semantically structured hypertext contents (2012) 0.03
    0.025601273 = product of:
      0.07680382 = sum of:
        0.07680382 = sum of:
          0.045492426 = weight(_text_:project in 633) [ClassicSimilarity], result of:
            0.045492426 = score(doc=633,freq=2.0), product of:
              0.19509704 = queryWeight, product of:
                4.220981 = idf(docFreq=1764, maxDocs=44218)
                0.04622078 = queryNorm
              0.23317845 = fieldWeight in 633, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.220981 = idf(docFreq=1764, maxDocs=44218)
                0.0390625 = fieldNorm(doc=633)
          0.03131139 = weight(_text_:22 in 633) [ClassicSimilarity], result of:
            0.03131139 = score(doc=633,freq=2.0), product of:
              0.16185729 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04622078 = queryNorm
              0.19345059 = fieldWeight in 633, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=633)
      0.33333334 = coord(1/3)
    
    Abstract
    Considering the characteristics of hypertext systems and problems such as cognitive overload and the disorientation of users, this project studies subject hypertext documents that have undergone conceptual structuring using facets for content representation and improvement of information retrieval during navigation. The main objective was to assess the possibility of the application of topic map technology for automating the compatibilization process of these structures. For this purpose, two dissertations from the UFMG Information Science Post-Graduation Program were adopted as samples. Both dissertations had been duly analyzed and structured on the MHTX (Hypertextual Map) prototype database. The faceted structures of both dissertations, which had been represented in conceptual maps, were then converted into topic maps. It was then possible to use the merge property of the topic maps to promote the semantic interrelationship between the maps and, consequently, between the hypertextual information resources proper. The merge results were then analyzed in the light of theories dealing with the compatibilization of languages developed within the realm of information technology and librarianship from the 1960s on. The main goals accomplished were: (a) the detailed conceptualization of the merge process of the topic maps, considering the possible compatibilization levels and the applicability of this technology in the integration of faceted structures; and (b) the production of a detailed sequence of steps that may be used in the implementation of topic maps based on faceted structures.
    Date
    22. 2.2013 11:39:23
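    The compatibilization step described in the abstract above rests on the merge property of topic maps: topics that share a subject identifier are collapsed into a single topic that inherits the names and associations of all source maps. The following is a minimal, hypothetical sketch of that mechanism; the data structures and identifiers are invented, and it is not the MHTX implementation.

```python
from collections import defaultdict

def merge_topic_maps(*maps):
    """Merge topic maps given as {subject_identifier: {"names": set, "associations": set}}."""
    merged = defaultdict(lambda: {"names": set(), "associations": set()})
    for topic_map in maps:
        for subject_id, topic in topic_map.items():
            # Topics sharing a subject identifier collapse into one merged topic.
            merged[subject_id]["names"] |= topic["names"]
            merged[subject_id]["associations"] |= topic["associations"]
    return dict(merged)

# Two faceted structures represented as tiny topic maps (invented example data).
map_a = {"urn:facet:indexing": {"names": {"Indexing"},
                                "associations": {("narrower", "urn:facet:subject-indexing")}}}
map_b = {"urn:facet:indexing": {"names": {"Indexação"},
                                "associations": {("related", "urn:facet:retrieval")}}}

merged = merge_topic_maps(map_a, map_b)
print(merged["urn:facet:indexing"])  # carries names and associations from both maps
```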
  3. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.02
    0.024470285 = product of:
      0.073410854 = sum of:
        0.073410854 = product of:
          0.22023255 = sum of:
            0.22023255 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.22023255 = score(doc=400,freq=2.0), product of:
                0.39186028 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04622078 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
  4. Green, R.: Relationships in the Dewey Decimal Classification (DDC) : plan of study (2008) 0.02
    0.024262628 = product of:
      0.07278788 = sum of:
        0.07278788 = product of:
          0.14557576 = sum of:
            0.14557576 = weight(_text_:project in 3397) [ClassicSimilarity], result of:
              0.14557576 = score(doc=3397,freq=8.0), product of:
                0.19509704 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.04622078 = queryNorm
                0.74617106 = fieldWeight in 3397, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3397)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    EPC Exhibit 129-36.1 presented intermediate results of a project to connect Relative Index terms to topics associated with classes and to determine if those Relative Index terms approximated the whole of the corresponding class or were in standing room in the class. The Relative Index project constitutes the first stage of a long(er)-term project to instill a more systematic treatment of relationships within the DDC. The present exhibit sets out a plan of study for that long-term project.
  5. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.02
    0.016313523 = product of:
      0.048940565 = sum of:
        0.048940565 = product of:
          0.1468217 = sum of:
            0.1468217 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.1468217 = score(doc=701,freq=2.0), product of:
                0.39186028 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04622078 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  6. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.02
    0.016313523 = product of:
      0.048940565 = sum of:
        0.048940565 = product of:
          0.1468217 = sum of:
            0.1468217 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.1468217 = score(doc=5820,freq=2.0), product of:
                0.39186028 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04622078 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  7. Stuckenschmidt, H.; Harmelen, F. van; Waard, A. de; Scerri, T.; Bhogal, R.; Buel, J. van; Crowlesmith, I.; Fluit, C.; Kampman, A.; Broekstra, J.; Mulligen, E. van: Exploring large document repositories with RDF technology : the DOPE project (2004) 0.01
    0.012131314 = product of:
      0.03639394 = sum of:
        0.03639394 = product of:
          0.07278788 = sum of:
            0.07278788 = weight(_text_:project in 762) [ClassicSimilarity], result of:
              0.07278788 = score(doc=762,freq=8.0), product of:
                0.19509704 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.04622078 = queryNorm
                0.37308553 = fieldWeight in 762, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.03125 = fieldNorm(doc=762)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This thesaurus-based search system uses automatic indexing, RDF-based querying, and concept-based visualization of results to support exploration of large online document repositories. Innovative research institutes rely on the availability of complete and accurate information about new research and development. Information providers such as Elsevier make it their business to provide the required information in a cost-effective way. The Semantic Web will likely contribute significantly to this effort because it facilitates access to an unprecedented quantity of data. The DOPE project (Drug Ontology Project for Elsevier) explores ways to provide access to multiple life-science information sources through a single interface. With the unremitting growth of scientific information, integrating access to all this information remains an important problem, primarily because the information sources involved are so heterogeneous. Sources might use different syntactic standards (syntactic heterogeneity), organize information in different ways (structural heterogeneity), and even use different terminologies to refer to the same information (semantic heterogeneity). Integrated access hinges on the ability to address these different kinds of heterogeneity. Also, mental models and keywords for accessing data generally diverge between subject areas and communities; hence, many different ontologies have emerged. An ideal architecture must therefore support the disclosure of distributed and heterogeneous data sources through different ontologies. To serve this need, we've developed a thesaurus-based search system that uses automatic indexing, RDF-based querying, and concept-based visualization. We describe here the conversion of an existing proprietary thesaurus to an open standard format, a generic architecture for thesaurus-based information access, an innovative user interface, and results of initial user studies with the resulting DOPE system.
    Content
    Cf.: Waard, A. de, C. Fluit and F. van Harmelen: Drug Ontology Project for Elsevier (DOPE). In: http://www.w3.org/2001/sw/sweo/public/UseCases/Elsevier/Elsevier_Aduna_VU.pdf.
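    Both the abstract and the linked use case describe thesaurus-based access: a concept is looked up in an RDF thesaurus and its labels are fed back into full-text search. Below is a small sketch of that pattern using rdflib and a SKOS-style vocabulary; the file name, concept label and query are placeholders, not EMTREE data and not the DOPE system itself.

```python
from rdflib import Graph

# Load a SKOS thesaurus exported as Turtle (placeholder file name).
g = Graph()
g.parse("thesaurus.ttl", format="turtle")

# Collect preferred and alternative labels of a concept to OR into a full-text query.
query = """
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?label WHERE {
  ?concept skos:prefLabel "aspirin"@en .
  ?concept skos:prefLabel|skos:altLabel ?label .
}
"""
expansion_terms = sorted({str(row.label) for row in g.query(query)})
print(expansion_terms)  # e.g. ['acetylsalicylic acid', 'aspirin'] with suitable data
```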
  8. Haslhofer, B.; Knezević, P.: ¬The BRICKS digital library infrastructure (2009) 0.01
    0.010722669 = product of:
      0.032168005 = sum of:
        0.032168005 = product of:
          0.06433601 = sum of:
            0.06433601 = weight(_text_:project in 3384) [ClassicSimilarity], result of:
              0.06433601 = score(doc=3384,freq=4.0), product of:
                0.19509704 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.04622078 = queryNorm
                0.32976416 = fieldWeight in 3384, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3384)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Service-oriented architectures and the wider acceptance of decentralized peer-to-peer architectures enable the transition from integrated, centrally controlled systems to federated and dynamically configurable systems. The benefits for the individual service providers and users are robustness of the system, independence from central authorities and flexibility in the usage of services. This chapter provides details of the European project BRICKS, which aims at enabling integrated access to distributed resources in the Cultural Heritage domain. The target audience is broad and heterogeneous and involves cultural heritage and educational institutions, the research community, industry, and the general public. The project idea is motivated by the fact that the amount of digital information and digitized content is continuously increasing, but much effort still has to be expended to discover and access it. The reasons for such a situation are heterogeneous data formats, restricted access, proprietary access interfaces, etc. Typical usage scenarios are integrated queries among several knowledge resources, e.g. to discover all Italian artifacts from the Renaissance in European museums. Another example is to follow the life cycle of historic documents, whose physical copies are distributed all over Europe. A standard method for integrated access is to place all available content and metadata in a central place. Unfortunately, such a solution requires a quite powerful and costly infrastructure if the volume of data is large. Considerations of cost optimization are highly important for Cultural Heritage institutions, especially if they are funded from public money. Therefore, better usage of the existing resources, i.e. a decentralized/P2P approach, promises to deliver a significantly less costly system and does not mean sacrificing too much on the performance side.
  9. Waard, A. de; Fluit, C.; Harmelen, F. van: Drug Ontology Project for Elsevier (DOPE) (2007) 0.01
    0.010506028 = product of:
      0.031518083 = sum of:
        0.031518083 = product of:
          0.063036166 = sum of:
            0.063036166 = weight(_text_:project in 758) [ClassicSimilarity], result of:
              0.063036166 = score(doc=758,freq=6.0), product of:
                0.19509704 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.04622078 = queryNorm
                0.32310158 = fieldWeight in 758, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.03125 = fieldNorm(doc=758)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Innovative research institutes rely on the availability of complete and accurate information about new research and development, and it is the business of information providers such as Elsevier to provide the required information in a cost-effective way. It is very likely that the semantic web will make an important contribution to this effort, since it facilitates access to an unprecedented quantity of data. However, with the unremitting growth of scientific information, integrating access to all this information remains a significant problem, not least because of the heterogeneity of the information sources involved - sources which may use different syntactic standards (syntactic heterogeneity), organize information in very different ways (structural heterogeneity) and even use different terminologies to refer to the same information (semantic heterogeneity). The ability to address these different kinds of heterogeneity is the key to integrated access. Thesauri have already proven to be a core technology to effective information access as they provide controlled vocabularies for indexing information, and thereby help to overcome some of the problems of free-text search by relating and grouping relevant terms in a specific domain. However, currently there is no open architecture which supports the use of these thesauri for querying other data sources. For example, when we move from the centralized and controlled use of EMTREE within EMBASE.com to a distributed setting, it becomes crucial to improve access to the thesaurus by means of a standardized representation using open data standards that allow for semantic qualifications. In general, mental models and keywords for accessing data diverge between subject areas and communities, and so many different ontologies have been developed. An ideal architecture must therefore support the disclosure of distributed and heterogeneous data sources through different ontologies. The aim of the DOPE project (Drug Ontology Project for Elsevier) is to investigate the possibility of providing access to multiple information sources in the area of life science through a single interface.
  10. Schmitz-Esser, W.: Language of general communication and concept compatibility (1996) 0.01
    0.01043713 = product of:
      0.03131139 = sum of:
        0.03131139 = product of:
          0.06262278 = sum of:
            0.06262278 = weight(_text_:22 in 6089) [ClassicSimilarity], result of:
              0.06262278 = score(doc=6089,freq=2.0), product of:
                0.16185729 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04622078 = queryNorm
                0.38690117 = fieldWeight in 6089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6089)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Pages
    S.11-22
  11. Drewer, P.; Massion, F.; Pulitano, D.: Was haben Wissensmodellierung, Wissensstrukturierung, künstliche Intelligenz und Terminologie miteinander zu tun? (2017) 0.01
    0.01043713 = product of:
      0.03131139 = sum of:
        0.03131139 = product of:
          0.06262278 = sum of:
            0.06262278 = weight(_text_:22 in 5576) [ClassicSimilarity], result of:
              0.06262278 = score(doc=5576,freq=2.0), product of:
                0.16185729 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04622078 = queryNorm
                0.38690117 = fieldWeight in 5576, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5576)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    13.12.2017 14:17:22
  12. Tudhope, D.; Hodge, G.: Terminology registries (2007) 0.01
    0.01043713 = product of:
      0.03131139 = sum of:
        0.03131139 = product of:
          0.06262278 = sum of:
            0.06262278 = weight(_text_:22 in 539) [ClassicSimilarity], result of:
              0.06262278 = score(doc=539,freq=2.0), product of:
                0.16185729 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04622078 = queryNorm
                0.38690117 = fieldWeight in 539, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=539)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    26.12.2011 13:22:07
  13. Haller, S.H.M.: Mappingverfahren zur Wissensorganisation (2002) 0.01
    0.01043713 = product of:
      0.03131139 = sum of:
        0.03131139 = product of:
          0.06262278 = sum of:
            0.06262278 = weight(_text_:22 in 3406) [ClassicSimilarity], result of:
              0.06262278 = score(doc=3406,freq=2.0), product of:
                0.16185729 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04622078 = queryNorm
                0.38690117 = fieldWeight in 3406, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3406)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    30. 5.2010 16:22:35
  14. Nielsen, M.: Neuronale Netze : Alpha Go - Computer lernen Intuition (2018) 0.01
    0.01043713 = product of:
      0.03131139 = sum of:
        0.03131139 = product of:
          0.06262278 = sum of:
            0.06262278 = weight(_text_:22 in 4523) [ClassicSimilarity], result of:
              0.06262278 = score(doc=4523,freq=2.0), product of:
                0.16185729 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04622078 = queryNorm
                0.38690117 = fieldWeight in 4523, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4523)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Spektrum der Wissenschaft. 2018, H.1, S.22-27
  15. Tomassen, S.L.: Research on ontology-driven information retrieval (2006 (?)) 0.01
    0.009098486 = product of:
      0.027295457 = sum of:
        0.027295457 = product of:
          0.054590914 = sum of:
            0.054590914 = weight(_text_:project in 4328) [ClassicSimilarity], result of:
              0.054590914 = score(doc=4328,freq=2.0), product of:
                0.19509704 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.04622078 = queryNorm
                0.27981415 = fieldWeight in 4328, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4328)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    An increasing number of recent information retrieval systems make use of ontologies to help the users clarify their information needs and come up with semantic representations of documents. A particular concern here is the integration of these semantic approaches with traditional search technology. The research presented in this paper examines how ontologies can be efficiently applied to large-scale search systems for the web. We describe how these systems can be enriched with adapted ontologies to provide both an in-depth understanding of the user's needs as well as an easy integration with standard vector-space retrieval systems. The ontology concepts are adapted to the domain terminology by computing a feature vector for each concept. Later, the feature vectors are used to enrich a provided query. The whole retrieval system is under development as part of a larger Semantic Web standardization project for the Norwegian oil & gas sector.
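    The enrichment step sketched in this abstract (a feature vector per ontology concept, computed from domain terminology and used to expand the query before standard vector-space retrieval) can be illustrated in a few lines. The concept vectors and terms below are invented toy values, not Tomassen's system.

```python
import math

def cosine(a, b):
    """Cosine similarity between two sparse term-weight vectors (dicts)."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

# Illustrative feature vectors adapting ontology concepts to domain terminology.
concept_vectors = {
    "Reservoir": {"reservoir": 0.9, "porosity": 0.6, "formation": 0.4},
    "Drilling":  {"drilling": 0.9, "wellbore": 0.7, "rig": 0.5},
}

def enrich_query(query_terms, top_k=1):
    q_vec = {t: 1.0 for t in query_terms}
    ranked = sorted(concept_vectors.items(), key=lambda kv: cosine(q_vec, kv[1]), reverse=True)
    enriched = list(query_terms)
    for _, vec in ranked[:top_k]:
        enriched += [t for t in vec if t not in enriched]   # add the concept's terms
    return enriched

print(enrich_query(["porosity", "formation"]))  # ['porosity', 'formation', 'reservoir']
```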
  16. Sure, Y.; Studer, R.: ¬A methodology for ontology-based knowledge management (2004) 0.01
    0.009098486 = product of:
      0.027295457 = sum of:
        0.027295457 = product of:
          0.054590914 = sum of:
            0.054590914 = weight(_text_:project in 4400) [ClassicSimilarity], result of:
              0.054590914 = score(doc=4400,freq=2.0), product of:
                0.19509704 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.04622078 = queryNorm
                0.27981415 = fieldWeight in 4400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4400)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Ontologies are a core element of the knowledge management architecture described in Chapter 1. In this chapter we describe a methodology for application-driven ontology development, covering the whole project lifecycle from the kick-off phase to the maintenance phase. Existing methodologies and practical ontology development experiences have in common that they start from the identification of the purpose of the ontology and the need for domain knowledge acquisition. They differ in their foci and in the subsequent steps to be taken. In our approach to the ontology development process, we integrate aspects from existing methodologies and lessons learned from practical experience (as described in Section 3.7). We put ontology development into a wider organizational context by performing an a priori feasibility study. The feasibility study is based on CommonKADS. We modified certain aspects of CommonKADS for a tight integration of the feasibility study into our methodology.
  17. Kiryakov, A.; Simov, K.; Ognyanov, D.: Ontology middleware and reasoning (2004) 0.01
    0.009098486 = product of:
      0.027295457 = sum of:
        0.027295457 = product of:
          0.054590914 = sum of:
            0.054590914 = weight(_text_:project in 4410) [ClassicSimilarity], result of:
              0.054590914 = score(doc=4410,freq=2.0), product of:
                0.19509704 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.04622078 = queryNorm
                0.27981415 = fieldWeight in 4410, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4410)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The ontology middleware discussed in this chapter can be seen as 'administrative' software infrastructure that makes the rest of the modules in a knowledge management toolset easier to integrate into real-world applications. The central issue is to make the methodology and modules available to society as a self-sufficient platform with mature support for development, management, maintenance, and use of middle-sized and large knowledge bases. This chapter starts with an explanation of the required features of ontology middleware in the context of our knowledge management architecture and the terminology used. In Section 11.2, the problem of versioning and tracking change is discussed. Section 11.3 presents the versioning model and its implementation developed in the project, and Section 11.4 describes the functionality of the instance reasoning module.
  18. Schreiber, G.; Amin, A.; Assem, M. van; Boer, V. de; Hardman, L.; Hildebrand, M.; Hollink, L.; Huang, Z.; Kersen, J. van; Niet, M. de; Omelayenko, B.; Ossenbruggen, J. van; Siebes, R.; Taekema, J.; Wielemaker, J.; Wielinga, B.: MultimediaN E-Culture demonstrator (2006) 0.01
    0.009098486 = product of:
      0.027295457 = sum of:
        0.027295457 = product of:
          0.054590914 = sum of:
            0.054590914 = weight(_text_:project in 4648) [ClassicSimilarity], result of:
              0.054590914 = score(doc=4648,freq=2.0), product of:
                0.19509704 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.04622078 = queryNorm
                0.27981415 = fieldWeight in 4648, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4648)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The main objective of the MultimediaN E-Culture project is to demonstrate how novel semantic-web and presentation technologies can be deployed to provide better indexing and search support within large virtual collections of cultural-heritage resources. The architecture is fully based on open web standards, in particular XML, SVG, RDF/OWL and SPARQL. One basic hypothesis underlying this work is that the use of explicit background knowledge in the form of ontologies/vocabularies/thesauri is particularly useful for information retrieval in knowledge-rich domains. This paper gives some details about the internals of the demonstrator.
  19. Nelson, S.J.; Powell, T.; Srinivasan, S.; Humphreys, B.L.: Unified Medical Language System® (UMLS®) Project (2009) 0.01
    0.009098486 = product of:
      0.027295457 = sum of:
        0.027295457 = product of:
          0.054590914 = sum of:
            0.054590914 = weight(_text_:project in 4701) [ClassicSimilarity], result of:
              0.054590914 = score(doc=4701,freq=2.0), product of:
                0.19509704 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.04622078 = queryNorm
                0.27981415 = fieldWeight in 4701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4701)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  20. Wright, L.W.; Nardini, H.K.G.; Aronson, A.R.; Rindflesch, T.C.: Hierarchical concept indexing of full-text documents in the Unified Medical Language System Information sources Map (1999) 0.01
    0.009098486 = product of:
      0.027295457 = sum of:
        0.027295457 = product of:
          0.054590914 = sum of:
            0.054590914 = weight(_text_:project in 2111) [ClassicSimilarity], result of:
              0.054590914 = score(doc=2111,freq=2.0), product of:
                0.19509704 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.04622078 = queryNorm
                0.27981415 = fieldWeight in 2111, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2111)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Full-text documents are a vital and rapidly growing part of online biomedical information. A single large document can contain as much information as a small database, but normally lacks the tight structure and consistent indexing of a database. Retrieval systems will often miss highly relevant parts of a document if the document as a whole appears irrelevant. Access to full-text information is further complicated by the need to search separately many disparate information resources. This research explores how these problems can be addressed by the combined use of 2 techniques: 1) natural language processing for automatic concept-based indexing of full text, and 2) methods for exploiting the structure and hierarchy of full-text documents. We describe methods for applying these techniques to a large collection of full-text documents drawn from the Health Services / Technology Assessment Text (HSTAT) database at the NLM and examine how this hierarchical concept indexing can assist both document- and source-level retrieval in the context of NLM's Information Source Map project
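    As a rough illustration of the hierarchical concept indexing idea (not the UMLS/HSTAT pipeline itself), the sketch below indexes each section of a structured document against a tiny concept dictionary, so retrieval can point to a relevant section rather than the whole document. The dictionary entries and identifiers are made up, and the matching is naive substring lookup rather than natural language processing.

```python
# Illustrative concept dictionary; identifiers are invented, not UMLS concept IDs.
CONCEPT_DICTIONARY = {
    "hypertension": "CONCEPT-001",
    "beta blocker": "CONCEPT-002",
}

def index_sections(sections):
    """sections maps a hierarchical section id (e.g. '2.1') to its text."""
    index = {}
    for section_id, text in sections.items():
        lowered = text.lower()
        hits = {cid for term, cid in CONCEPT_DICTIONARY.items() if term in lowered}
        if hits:
            index[section_id] = hits   # concepts found in this section
    return index

document = {
    "1":   "Overview of cardiovascular risk.",
    "2.1": "Management of hypertension with beta blocker therapy.",
}
print(index_sections(document))  # {'2.1': {'CONCEPT-001', 'CONCEPT-002'}}
```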

Languages

  • e 63
  • d 11

Types

  • a 56
  • el 16
  • x 6
  • m 2
  • n 1
  • p 1
  • r 1