Search (79 results, page 1 of 4)

  • theme_ss:"Wissensrepräsentation" ("knowledge representation")
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.12
    
    Abstract
    In a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values forming a group of child concepts. We call these attributes facets: classification, for example, has facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, faceted relations are direct parent-to-child links, whereas the hypernym relation is a multi-hop, i.e., ancestor-to-descendant, link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations from a data science corpus, and we propose a hierarchy growth algorithm to infer the parent-child links from these three types of relationships. It resolves conflicts by maintaining the acyclic structure of the hierarchy.
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf
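The conflict-resolution idea mentioned in the abstract above (keeping the hierarchy acyclic while adding inferred parent-child links) can be sketched as follows. This is a minimal illustration, not the authors' algorithm; the concept names and functions are invented for the example:

```python
# Minimal sketch: grow a concept hierarchy by accepting a candidate
# parent -> child link only if it keeps the graph acyclic.

def reachable(graph, start, goal):
    """Depth-first check whether `goal` is reachable from `start`."""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node == goal:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, ()))
    return False

def add_link(graph, parent, child):
    """Insert parent -> child unless it would create a cycle."""
    if reachable(graph, child, parent):  # child is already an ancestor
        return False
    graph.setdefault(parent, set()).add(child)
    return True

hierarchy = {}
add_link(hierarchy, "classification", "svm")
add_link(hierarchy, "svm", "linear svm")
# A back-link from a descendant to an ancestor is rejected:
assert add_link(hierarchy, "linear svm", "classification") is False
```

A real system would additionally weigh evidence from the extracted synonym, sibling, and ancestor-descendant relations before committing a link.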
  2. SKOS2OWL : Online tool for deriving OWL ontologies from SKOS categorization schemas (2007) 0.03
    
    Abstract
    SKOS2OWL is an online tool that converts hierarchical classifications available in the W3C SKOS (Simple Knowledge Organization System) format into RDFS or OWL ontologies. In many cases the resulting ontologies can be used directly; if not, they can be refined using standard ontology engineering tools such as Protégé.
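The core of such a conversion can be sketched in a few lines. This is a hedged illustration of the general SKOS-to-OWL idea, not the actual SKOS2OWL tool: each skos:Concept becomes an owl:Class, and skos:broader links become rdfs:subClassOf axioms (triples are modelled as plain tuples here; the example names are invented):

```python
# Toy SKOS -> OWL mapping over (subject, predicate, object) string triples.

def skos_to_owl(triples):
    """Map SKOS concept/broader triples to OWL-style class triples."""
    out = []
    for s, p, o in triples:
        if p == "rdf:type" and o == "skos:Concept":
            out.append((s, "rdf:type", "owl:Class"))
        elif p == "skos:broader":
            # narrower concept s has broader concept o => s subClassOf o
            out.append((s, "rdfs:subClassOf", o))
    return out

skos = [
    ("ex:Mammal", "rdf:type", "skos:Concept"),
    ("ex:Dog", "rdf:type", "skos:Concept"),
    ("ex:Dog", "skos:broader", "ex:Mammal"),
]
owl = skos_to_owl(skos)
assert ("ex:Dog", "rdfs:subClassOf", "ex:Mammal") in owl
```

Note that skos:broader does not always denote subsumption, which is exactly why the resulting ontologies may need manual refinement, as the abstract points out.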
  3. Bast, H.; Bäurle, F.; Buchhold, B.; Haussmann, E.: Broccoli: semantic full-text search at your fingertips (2012) 0.03
    
    Abstract
    We present Broccoli, a fast and easy-to-use search engine for what we call semantic full-text search. Semantic full-text search combines the capabilities of standard full-text search and ontology search. The search operates on four kinds of objects: ordinary words (e.g., edible), classes (e.g., plants), instances (e.g., Broccoli), and relations (e.g., occurs-with or native-to). Queries are trees, where nodes are arbitrary bags of these objects, and arcs are relations. The user interface guides the user in incrementally constructing such trees by instant (search-as-you-type) suggestions of words, classes, instances, or relations that lead to good hits. Both standard full-text search and pure ontology search are included as special cases. In this paper, we describe the query language of Broccoli, a new kind of index that enables fast processing of queries from that language as well as fast query suggestion, the natural language processing required, and the user interface. We evaluated query times and result quality on the full version of the English Wikipedia (32 GB XML dump) combined with the YAGO ontology (26 million facts). We have implemented a fully functional prototype based on our ideas; see: http://broccoli.informatik.uni-freiburg.de.
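The tree-shaped queries described in the abstract (nodes as bags of words/classes/instances, arcs as relations) can be rendered as a simple nested structure. This is an illustrative toy, not Broccoli's actual query language; all names are invented:

```python
# A toy Broccoli-style query tree: "edible plants native to Europe that
# occur with vitamin". Nodes hold bags of objects; arcs hold relations.

query = {
    "node": ["plant", "edible"],  # a class and a word in one bag
    "arcs": [
        ("native-to", {"node": ["Europe"], "arcs": []}),
        ("occurs-with", {"node": ["vitamin"], "arcs": []}),
    ],
}

def leaves(tree):
    """Collect the bags at the leaves of a query tree."""
    if not tree["arcs"]:
        return [tree["node"]]
    out = []
    for _, child in tree["arcs"]:
        out.extend(leaves(child))
    return out

assert leaves(query) == [["Europe"], ["vitamin"]]
```

A plain full-text query is the special case of a single node with no arcs, which matches the abstract's claim that standard full-text search is included as a special case.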
  4. Khoo, C.S.G.; Zhang, D.; Wang, M.; Yun, X.J.: Subject organization in three types of information resources : an exploratory study (2012) 0.03
    
    Abstract
    Knowledge tends to be structured differently in different types of information resources and information genres due to the different purposes of the resource/genre, and the characteristics of the media or format of the resource. This study investigates subject organization in three types of information resources: books (i.e. monographs), Web directories and information websites that provide information on particular subjects. Twelve subjects (topics) were selected in the areas of science, arts/humanities and social science, and two books, two Web directories and two information websites were sampled for each subject. The top two levels of the hierarchical subject organization in each resource were harvested and analyzed. Books have the highest proportion of general subject categories (e.g. history, theory and definition) and process categories (indicating step-by-step instructions). Information websites have the highest proportion of target user categories and genre-specific categories (e.g. about us and contact us), whereas Web directories have the highest proportion of specialty categories (i.e. sub-disciplines), industry-role categories (e.g. stores, schools and associations) and format categories (e.g. books, blogs and videos). Some disciplinary differences were also identified.
  5. Mestrovic, A.; Cali, A.: ¬An ontology-based approach to information retrieval (2017) 0.03
    
    Abstract
    We define a general framework for ontology-based information retrieval (IR). In our approach, document and query expansion rely on a base taxonomy that is extracted from a lexical database or a Linked Data set (e.g. WordNet, Wiktionary etc.). Each term from a document or query is modelled as a vector of base concepts from the base taxonomy. We define a set of mapping functions which map multiple ontological layers (dimensions) onto the base taxonomy. This way, each concept from the included ontologies can also be represented as a vector of base concepts from the base taxonomy. We propose a general weighting schema which is used for the vector space model. Our framework can therefore take into account various lexical and semantic relations between terms and concepts (e.g. synonymy, hierarchy, meronymy, antonymy, geo-proximity, etc.). This allows us to avoid certain vocabulary problems (e.g. synonymy, polysemy) as well as to reduce the vector size in the IR tasks.
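The abstract's central idea, representing every term as a vector over a small base taxonomy so that semantically related terms end up close together, can be sketched as follows. The base concepts and weights below are invented for illustration; they are not the paper's actual weighting schema:

```python
# Minimal sketch: terms as vectors over base concepts, compared by cosine.
import math

BASE = ["animal", "plant", "vehicle"]  # toy base taxonomy

term_vectors = {
    "dog":    [1.0, 0.0, 0.0],
    "canine": [0.9, 0.0, 0.0],  # near-synonym of "dog": similar vector
    "car":    [0.0, 0.0, 1.0],
}

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Synonyms score higher than unrelated terms, mitigating vocabulary mismatch:
assert cosine(term_vectors["dog"], term_vectors["canine"]) > \
       cosine(term_vectors["dog"], term_vectors["car"])
```

In the framework described above, the mapping functions would project concepts from additional ontological layers onto the same base vectors, which also keeps the vector size bounded.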
  6. Wunner, T.; Buitelaar, P.; O'Riain, S.: Semantic, terminological and linguistic interpretation of XBRL (2010) 0.03
    
    Abstract
    Standardization efforts in financial reporting have led to large numbers of machine-interpretable vocabularies that attempt to model complex accounting practices in XBRL (eXtensible Business Reporting Language). Because reporting agencies do not require fine-grained semantic and terminological representations, these vocabularies cannot be easily reused. Ontology-based information extraction, in particular, requires much greater semantic and terminological structure, and the introduction of a linguistic structure currently absent from XBRL. In order to facilitate such reuse, we propose a three-faceted methodology that analyzes and enriches the XBRL vocabulary: (1) transform the semantic structure by analyzing the semantic relationships between terms (e.g. taxonomic, meronymic); (2) enhance the terminological structure by using several domain-specific (XBRL), domain-related (SAPTerm, etc.) and domain-independent (GoogleDefine, Wikipedia, etc.) terminologies; and (3) add linguistic structure at term level (e.g. part-of-speech, morphology, syntactic arguments). This paper outlines a first experiment towards implementing this methodology on the International Financial Reporting Standard XBRL vocabulary.
  7. Assem, M. van; Rijgersberg, H.; Wigham, M.; Top, J.: Converting and annotating quantitative data tables (2010) 0.02
    
    Abstract
    Companies, governmental agencies and scientists produce a large amount of quantitative (research) data, consisting of measurements ranging from, e.g., the surface temperature of an ocean to the viscosity of a sample of mayonnaise. Such measurements are stored in tables in, e.g., spreadsheet files and research reports. To integrate and reuse such data, it is necessary to have a semantic description of the data. However, the notation used is often ambiguous, making automatic interpretation and conversion to RDF or another suitable format difficult. For example, the table header cell "f(Hz)" refers to frequency measured in Hertz, but the symbol "f" can also refer to the unit farad or to the quantities force or luminous flux. Current annotation tools for this task either work on less ambiguous data or perform a more limited task. We introduce new disambiguation strategies based on an ontology, which allow us to improve performance on "sloppy" datasets not yet targeted by existing systems.
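The "f(Hz)" example from the abstract shows how a unit can resolve an ambiguous symbol. The sketch below illustrates that idea with hard-coded lookup tables standing in for the ontology; it is not the paper's actual system, and the mappings are invented:

```python
# Toy disambiguation: the symbol "f" is ambiguous, but the unit in the
# table header narrows it down to a single quantity.

SYMBOL_CANDIDATES = {"f": {"frequency", "force", "luminous flux", "farad"}}
UNIT_TO_QUANTITY = {"Hz": "frequency", "N": "force", "lm": "luminous flux"}

def disambiguate(header):
    """Resolve a header like 'f(Hz)' to a quantity using its unit."""
    symbol, unit = header.rstrip(")").split("(")
    quantity = UNIT_TO_QUANTITY.get(unit)
    if quantity in SYMBOL_CANDIDATES.get(symbol, set()):
        return quantity
    return None

assert disambiguate("f(Hz)") == "frequency"
assert disambiguate("f(N)") == "force"
```

A real ontology-backed annotator would additionally exploit column values and neighbouring headers as context, which is what makes "sloppy" datasets tractable.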
  8. Aker, A.; Plaza, L.; Lloret, E.; Gaizauskas, R.: Do humans have conceptual models about geographic objects? : a user study (2013) 0.02
    
    Abstract
    In this article, we investigate what sorts of information humans request about geographical objects of the same type. For example, Edinburgh Castle and Bodiam Castle are two objects of the same type: "castle." The question is whether specific information is requested for the object type "castle" and how this information differs for objects of other types (e.g., church, museum, or lake). We aim to answer this question using an online survey. In the survey, we showed 184 participants 200 images pertaining to urban and rural objects and asked them to write questions for which they would like to know the answers when seeing those objects. Our analysis of the 6,169 questions collected in the survey shows that humans have shared ideas of what to ask about geographical objects. When the object types resemble each other (e.g., church and temple), the requested information is similar for the objects of these types. Otherwise, the information is specific to an object type. Our results may be very useful in guiding Natural Language Processing tasks involving automatic generation of templates for image descriptions and their assessment, as well as image indexing and organization.
  9. Giunchiglia, F.; Dutta, B.; Maltese, V.: From knowledge organization to knowledge representation (2014) 0.02
    
    Abstract
    So far, within the library and information science (LIS) community, knowledge organization (KO) has developed its own very successful solutions to document search, allowing for the classification, indexing and search of millions of books. However, current KO solutions are limited in expressivity, as they only support queries by document properties, e.g., by title, author and subject. In parallel, within the artificial intelligence and semantic web communities, knowledge representation (KR) has developed very powerful and expressive techniques which, via the use of ontologies, support queries by any entity property (e.g., the properties of the entities described in a document). However, KR has not yet scaled to the level of KO, mainly because of the lack of a precise and scalable entity specification methodology. In this paper we present DERA, a new methodology inspired by the faceted approach, as introduced in KO, that retains all the advantages of KR and compensates for the limitations of KO. DERA guarantees at the same time quality, extensibility, scalability and effectiveness in search.
  10. Jansen, L.: Four rules for classifying social entities (2014) 0.02
    
    Abstract
    Many top-level ontologies like Basic Formal Ontology (BFO) have been developed as a framework for ontologies in the natural sciences. The aim of the present essay is to extend the account of BFO to a very special layer of reality, the world of social entities. While natural entities like bacteria, thunderstorms or temperatures exist independently from human action and thought, social entities like countries, hospitals or money come into being only through human collective intentions and collective actions. Recently, the regional ontology of the social world has attracted considerable research interest in philosophy - witness, e.g., the pioneering work by Gilbert, Tuomela and Searle. There is a considerable class of phenomena that require the participation of more than one human agent: nobody can tango alone, play tennis against oneself, or set up a parliamentary democracy for oneself. Through cooperation and coordination of their wills and actions, agents can act together - they can perform social actions and group actions. An important kind of social action is the establishment of an institution (e.g. a hospital, a research agency or a marriage) through mutual promise or (social) contract. Another important kind of social action is the imposition of a social status on certain entities. For example, a society can impose the status of being a 20 Euro note on certain pieces of paper, or the status of being an approved medication on a certain chemical substance.
  11. Miles, A.; Pérez-Agüera, J.R.: SKOS: Simple Knowledge Organisation for the Web (2006) 0.02
    
    Abstract
    This article introduces the Simple Knowledge Organisation System (SKOS), a Semantic Web language for representing controlled structured vocabularies, including thesauri, classification schemes, subject heading systems and taxonomies. SKOS provides a framework for publishing thesauri, classification schemes, and subject indexes on the Web, and for applying these systems to resource collections that are part of the Semantic Web. Semantic Web applications may harvest and merge SKOS data to integrate and enhance retrieval services across multiple collections (e.g. libraries). This article also describes some alternatives for integrating Semantic Web services based on the Resource Description Framework (RDF) and SKOS into a distributed enterprise architecture.
  12. Green, R.: WordNet (2009) 0.02
    
    Abstract
    WordNet, a lexical database for English, is organized around semantic and lexical relationships between synsets, concepts represented by sets of synonymous word senses. Offering reasonably comprehensive coverage of the nouns, verbs, adjectives, and adverbs of general English, WordNet is a widely used resource for dealing with the ambiguity that arises from homonymy, polysemy, and synonymy. WordNet is used in many information-related tasks and applications (e.g., word sense disambiguation, semantic similarity, lexical chaining, alignment of parallel corpora, text segmentation, sentiment and subjectivity analysis, text classification, information retrieval, text summarization, question answering, information extraction, and machine translation).
  13. Miles, A.; Matthews, B.; Beckett, D.; Brickley, D.; Wilson, M.; Rogers, N.: SKOS: A language to describe simple knowledge structures for the web (2005) 0.02
    
    Content
    "Textual content-based search engines for the web have a number of limitations. Firstly, many web resources have little or no textual content (images, audio or video streams etc.) Secondly, precision is low where natural language terms have overloaded meaning (e.g. 'bank', 'watch', 'chip' etc.) Thirdly, recall is incomplete where the search does not take account of synonyms or quasi-synonyms. Fourthly, there is no basis for assisting a user in modifying (expanding, refining, translating) a search based on the meaning of the original search. Fifthly, there is no basis for searching across natural languages, or framing search queries in terms of symbolic languages. The Semantic Web is a framework for creating, managing, publishing and searching semantically rich metadata for web resources. Annotating web resources with precise and meaningful statements about conceptual aspects of their content provides a basis for overcoming all of the limitations of textual content-based search engines listed above. Creating this type of metadata requires that metadata generators are able to refer to shared repositories of meaning: 'vocabularies' of concepts that are common to a community, and describe the domain of interest for that community.
    This type of effort is common in the digital library community, where a group of experts will interact with a user community to create a thesaurus for a specific domain (e.g. the Art & Architecture Thesaurus (AAT)) or an overarching classification scheme (e.g. the Dewey Decimal Classification). A similar type of activity is being undertaken more recently in a less centralised manner by web communities, producing for example the DMOZ web directory, or the Topic Exchange for weblog topics. The web, including the semantic web, provides a medium within which communities can interact and collaboratively build and use vocabularies of concepts. A simple language is required that allows these communities to express the structure and content of their vocabularies in a machine-understandable way, enabling exchange and reuse. The Resource Description Framework (RDF) is an ideal language for making statements about web resources and publishing metadata. However, RDF provides only the low level semantics required to form metadata statements. RDF vocabularies must be built on top of RDF to support the expression of more specific types of information within metadata. Ontology languages such as OWL add a layer of expressive power to RDF, and provide powerful tools for defining complex conceptual structures, which can be used to generate rich metadata. However, the class-oriented, logically precise modelling required to construct useful web ontologies is demanding in terms of expertise, effort, and therefore cost. In many cases this type of modelling may be superfluous or unsuited to requirements. Therefore there is a need for a language for expressing vocabularies of concepts for use in semantically rich metadata, that is powerful enough to support semantically enhanced search, but simple enough to be undemanding in terms of the cost and expertise required to use it."
  14. Widhalm, R.; Mueck, T.A.: Merging topics in well-formed XML topic maps (2003) 0.02
    
    Abstract
    Topic Maps are a standardized modelling approach for the semantic annotation and description of WWW resources. They enable improved search and navigational access to information objects stored in semi-structured information spaces like the WWW. However, the corresponding standards, ISO 13250 and XTM (XML Topic Maps), lack formal semantics; several questions concerning, e.g., subclassing, inheritance or the merging of topics are left open. The proposed TMUML meta model, directly derived from the well-known UML meta model, is a meta model for Topic Maps that enables semantic constraints to be formulated in OCL (Object Constraint Language) in order to answer such open questions and overcome possible inconsistencies in Topic Map repositories. We examine the XTM merging conditions and show, in several examples, how the TMUML meta model enables semantic constraints for Topic Map merging to be formulated in OCL. Finally, we show how the Topic Map validation process, i.e., checking whether a Topic Map is well formed, includes our merging conditions.
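The merging conditions discussed in the abstract can be illustrated with a deliberately simplified rule: merge topics that share a subject identifier. Real XTM merging involves further conditions (e.g. base names in the same scope), and the data below is invented:

```python
# Simplified XTM-style topic merging: union-merge topics that share at
# least one subject identifier.

def merge_topics(topics):
    """Merge topics whose subject-identifier sets overlap."""
    merged = []
    for topic in topics:
        for existing in merged:
            if existing["ids"] & topic["ids"]:
                existing["ids"] |= topic["ids"]
                existing["names"] |= topic["names"]
                break
        else:
            merged.append({"ids": set(topic["ids"]),
                           "names": set(topic["names"])})
    return merged

topics = [
    {"ids": {"http://example.org/puccini"}, "names": {"Puccini"}},
    {"ids": {"http://example.org/puccini"}, "names": {"Giacomo Puccini"}},
    {"ids": {"http://example.org/verdi"}, "names": {"Verdi"}},
]
result = merge_topics(topics)
assert len(result) == 2  # the two Puccini topics were merged
```

Formulating such conditions as OCL constraints over a meta model, as the paper does, makes them checkable during validation rather than ad hoc code like this sketch.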
  15. Fluit, C.; Horst, H. ter; Meer, J. van der; Sabou, M.; Mika, P.: Spectacle (2004) 0.02
    
    Abstract
    Many Semantic Web initiatives improve the capabilities of machines to exchange the meaning of information with other machines. These efforts lead to an increased quality of the application's results, but their user interfaces take little or no advantage of the semantic richness. For example, an ontology-based search engine will use its ontology when evaluating the user's query (e.g. for query formulation, disambiguation or evaluation), but fails to use it to significantly enrich the presentation of the results to a human user. One could imagine, for instance, replacing the endless list of hits with a structured presentation based on the semantic properties of the hits. Another problem is that the modelling of a domain is done from a single perspective (most often that of the information provider). Therefore, presentation based on the resulting ontology is unlikely to satisfy the needs of all the different types of users of the information. So even assuming an ontology for the domain is in place, mapping that ontology to the needs of individual users - based on their tasks, expertise and personal preferences - is not trivial.
  16. Hajibayova, L.; Jacob, E.K.: ¬A theoretical framework for operationalizing basic level categories in knowledge organization research (2012) 0.02
    
    Abstract
     Research on categories indicates that superordinate categories lack informativeness because they are represented by only a few attributes, while subordinate categories lack cognitive economy because they are represented by too many attributes (e.g., Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976). Basic level categories balance informativeness and cognitive economy: They represent the most attributes common to category members and the fewest attributes shared across categories. Green (2006) has suggested that the universality of basic level categories can be used for building crosswalks between classificatory systems. However, studies of basic level categories in KO systems have assumed that the notion of a basic level category is understood and have failed to operationalize the notion of "basic level category" before applying it in the analysis of user-generated vocabularies. Heidegger's (1953/1996) notion of handiness (i.e., zuhandenheit, or being "at hand") can provide a framework for understanding the unstable and relational nature of basic level categories and for operationalizing basic level categories in KO research.
  17. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005)
    
    Content
     Vgl.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  18. Xiong, C.: Knowledge based text representations for information retrieval (2016)
    
    Content
     Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Vgl.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  19. Breslin, J.G.: Social semantic information spaces (2009)
    
    Abstract
     The structural and syntactic web put in place in the early 1990s is still much the same as what we use today: resources (web pages, files, etc.) connected by untyped hyperlinks. By untyped, we mean that there is no easy way for a computer to figure out what a link between two pages means - for example, on the W3C website, there are hundreds of links to the various organisations that are registered members of the association, but there is nothing explicitly saying that the link is to an organisation that is a "member of" the W3C or what type of organisation is represented by the link. On John's work page, he links to many papers he has written, but it does not explicitly say that he is the author of those papers or that he wrote such-and-such when he was working at a particular university. In fact, the Web was envisaged to be much more, as one can see from the image in Fig. 1, which is taken from Tim Berners-Lee's original outline for the Web in 1989, entitled "Information Management: A Proposal". In this, all the resources are connected by links describing the type of relationships, e.g. "wrote", "describe", "refers to", etc. This is a precursor to the Semantic Web which we will come back to later.
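The distinction the abstract draws between untyped hyperlinks and typed, machine-readable links can be sketched as RDF-style (subject, predicate, object) triples. A minimal sketch, with resource and predicate names invented for illustration (they are not taken from the chapter):

```python
# Untyped web: a link only says "A points to B" - the relationship
# itself is invisible to a machine.
untyped_links = {
    ("w3.org", "example-org.com"),
    ("john/work", "paper-123"),
}

# Typed links make the relationship explicit, as in RDF triples of
# (subject, predicate, object).
typed_links = {
    ("example-org.com", "memberOf", "w3.org"),
    ("john", "wrote", "paper-123"),
    ("paper-123", "refersTo", "paper-042"),
}

def objects(triples, subject, predicate):
    """Return every object linked from `subject` via `predicate`."""
    return {o for s, p, o in triples if s == subject and p == predicate}

# A machine can now answer "what did John write?" directly, which the
# untyped link set cannot express.
print(objects(typed_links, "john", "wrote"))
# → {'paper-123'}
```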
  20. Haslhofer, B.; Knežević, P.: ¬The BRICKS digital library infrastructure (2009)
    
    Abstract
     Service-oriented architectures, and the wider acceptance of decentralized peer-to-peer architectures, enable the transition from integrated, centrally controlled systems to federated, dynamically configurable systems. The benefits for the individual service providers and users are robustness of the system, independence of central authorities and flexibility in the usage of services. This chapter provides details of the European project BRICKS, which aims at enabling integrated access to distributed resources in the Cultural Heritage domain. The target audience is broad and heterogeneous and involves cultural heritage and educational institutions, the research community, industry, and the general public. The project idea is motivated by the fact that the amount of digital information and digitized content is continuously increasing, but still much effort has to be expended to discover and access it. The reasons for such a situation are heterogeneous data formats, restricted access, proprietary access interfaces, etc. Typical usage scenarios are integrated queries among several knowledge resources, e.g. to discover all Italian artifacts from the Renaissance in European museums. Another example is to follow the life cycle of historic documents, whose physical copies are distributed all over Europe. A standard method for integrated access is to place all available content and metadata in a central place. Unfortunately, such a solution requires a quite powerful and costly infrastructure if the volume of data is large. Considerations of cost optimization are highly important for Cultural Heritage institutions, especially if they are funded from public money. Therefore, better usage of the existing resources, i.e. a decentralized/P2P approach, promises to deliver a significantly less costly system, and does not mean sacrificing too much on the performance side.
