Search (6 results, page 1 of 1)

  • year_i:[2020 TO 2030}
  • theme_ss:"Wissensrepräsentation"
  1. Silva, S.E.; Reis, L.P.; Fernandes, J.M.; Sester Pereira, A.D.: A multi-layer framework for semantic modeling (2020) 0.05
    0.05010998 = product of:
      0.07516497 = sum of:
        0.04648775 = weight(_text_:search in 5712) [ClassicSimilarity], result of:
          0.04648775 = score(doc=5712,freq=6.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.2660511 = fieldWeight in 5712, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=5712)
        0.028677218 = product of:
          0.057354435 = sum of:
            0.057354435 = weight(_text_:engines in 5712) [ClassicSimilarity], result of:
              0.057354435 = score(doc=5712,freq=2.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.22454272 = fieldWeight in 5712, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5712)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
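    The indented breakdown above is Lucene's ClassicSimilarity explain output for the query terms "search" and "engines"; the entries below carry the same kind of tree. A minimal sketch (Python) that recomputes this entry's 0.05 score from nothing but the factors listed there, where tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and coord scales by the fraction of query clauses that matched:

    # Recompute entry 1's relevance score from the explain tree above.
    def term_score(freq, idf, query_norm, field_norm):
        """queryWeight * fieldWeight for one matched query term."""
        query_weight = idf * query_norm                  # idf * queryNorm
        field_weight = (freq ** 0.5) * idf * field_norm  # tf(freq) = sqrt(freq)
        return query_weight * field_weight

    QUERY_NORM = 0.05027291

    # weight(_text_:search in 5712): freq=6.0, idf=3.475677, fieldNorm=0.03125
    search = term_score(6.0, 3.475677, QUERY_NORM, 0.03125)

    # weight(_text_:engines in 5712): freq=2.0, idf=5.080822, fieldNorm=0.03125,
    # scaled by coord(1/2) because only one of that sub-query's two clauses matched.
    engines = term_score(2.0, 5.080822, QUERY_NORM, 0.03125) * 0.5

    # Outer coord(2/3): two of the three top-level query clauses matched doc 5712.
    total = (search + engines) * (2.0 / 3.0)

    print(f"{search:.8f}")   # ~0.04648775
    print(f"{engines:.8f}")  # ~0.02867722
    print(f"{total:.8f}")    # ~0.05010998, which rounds to the 0.05 shown above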
    
    Abstract
    Purpose: The purpose of this paper is to introduce a multi-level framework for semantic modeling (MFSM) based on four signification levels: objects, classes of entities, instances and domains. Four fundamental propositions of the signification process underpin these levels, namely classification, decomposition, instantiation and contextualization.
    Design/methodology/approach: A deductive approach guided the design of the modeling framework. The authors validated the MFSM empirically in two ways. First, they identified the signification processes used in articles that deal with semantic modeling. They then applied the MFSM to model the semantic context of the literature on lean manufacturing, a field of management science.
    Findings: The MFSM presents a highly consistent approach to the signification process, integrates the semantic modeling literature into a new and comprehensive view, and permits the modeling of any semantic context, thus facilitating the development of knowledge organization systems based on semantic search.
    Research limitations/implications: Use of the MFSM is manual and therefore requires considerable effort from the team that decides to model a semantic context. In this paper the modeling was carried out by specialists; in future work it should also be applied by lay users.
    Practical implications: The MFSM opens up avenues for a new form of document classification, for the development of tools based on semantic search, and for investigating how users conduct their searches.
    Social implications: The MFSM can be used to model archives semantically in public or private settings. In the future it can be incorporated into search engines to make users' searches more efficient.
    Originality/value: The MFSM provides a new and comprehensive approach to the elementary levels and activities in the process of signification. In addition, the framework offers a new way to model any semantic context by classifying its objects.
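    The four signification levels and four processes named in the abstract lend themselves to a compact data model. The following is only an illustrative sketch, not taken from the paper; all class and field names are hypothetical, and the example fragment mirrors the lean-manufacturing validation domain mentioned above:

    # Illustrative sketch only (not from the paper): one way to hold the four
    # MFSM signification levels and the processes that relate them.
    from dataclasses import dataclass, field

    @dataclass
    class Object:            # an object as it appears in the source material
        name: str

    @dataclass
    class Instance:          # an object instantiated under a class (instantiation)
        obj: Object

    @dataclass
    class EntityClass:       # a class of entities (classification)
        name: str
        parts: list = field(default_factory=list)      # decomposition into component classes
        instances: list = field(default_factory=list)  # instantiation of objects under this class

    @dataclass
    class Domain:            # the domain that contextualizes the classes (contextualization)
        name: str
        classes: list = field(default_factory=list)

    # Hypothetical fragment of the lean-manufacturing context used for validation:
    waste = EntityClass("waste")
    waste.instances.append(Instance(Object("overproduction")))
    lean = Domain("lean manufacturing", classes=[waste])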
  2. Tramullas, J.; Garrido-Picazo, P.; Sánchez-Casabón, A.I.: Use of Wikipedia categories on information retrieval research : a brief review (2020) 0.01
    0.013419857 = product of:
      0.04025957 = sum of:
        0.04025957 = weight(_text_:search in 5365) [ClassicSimilarity], result of:
          0.04025957 = score(doc=5365,freq=2.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.230407 = fieldWeight in 5365, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=5365)
      0.33333334 = coord(1/3)
    
    Abstract
    Wikipedia categories, a classification scheme built for organizing and describing Wikipedia articles, are being applied in computer science research. This paper adopts a systematic literature review approach in order to identify the different approaches to and uses of Wikipedia categories in information retrieval research. Several types of work are identified, depending on whether they study the category structure itself or use it as a tool for processing and analyzing documentary corpora other than Wikipedia. Information retrieval is identified as one of the major areas of use, in particular the refinement and improvement of search expressions and the construction of textual corpora. However, the available works show that, in many cases, the research approaches applied and the results obtained can be integrated into a comprehensive and inclusive concept of information retrieval.
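    One of the uses the review singles out, refining search expressions, is easy to illustrate. The sketch below is not from the paper; it only shows the general pattern of pulling an article's categories from the standard MediaWiki API and appending a few of them to a query (the expansion policy itself is an arbitrary placeholder):

    # Illustrative sketch (not from the paper): expand a search expression with
    # the Wikipedia categories of a seed article, via the standard MediaWiki API.
    import requests

    def wikipedia_categories(title, lang="en"):
        """Return the category labels attached to a Wikipedia article."""
        resp = requests.get(
            f"https://{lang}.wikipedia.org/w/api.php",
            params={
                "action": "query",
                "prop": "categories",
                "titles": title,
                "cllimit": "max",
                "format": "json",
            },
            timeout=10,
        )
        resp.raise_for_status()
        pages = resp.json()["query"]["pages"]
        labels = []
        for page in pages.values():
            for cat in page.get("categories", []):
                labels.append(cat["title"].removeprefix("Category:"))
        return labels

    # Naive query expansion: original terms plus a few category labels.
    query = "information retrieval"
    expanded = query + " " + " ".join(wikipedia_categories("Information retrieval")[:3])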
  3. MacFarlane, A.; Missaoui, S.; Frankowska-Takhari, S.: On machine learning and knowledge organization in multimedia information retrieval (2020) 0.01
    0.011183213 = product of:
      0.03354964 = sum of:
        0.03354964 = weight(_text_:search in 5732) [ClassicSimilarity], result of:
          0.03354964 = score(doc=5732,freq=2.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.19200584 = fieldWeight in 5732, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5732)
      0.33333334 = coord(1/3)
    
    Abstract
    Recent technological developments have increased the use of machine learning to solve many problems, including many in information retrieval. Multimedia information retrieval as a problem represents a significant challenge to machine learning as a technological solution, but some problems can still be addressed by using appropriate AI techniques. We review the technological developments and provide a perspective on the use of machine learning in conjunction with knowledge organization to address multimedia IR needs. The semantic gap in multimedia IR remains a significant problem in the field, and solutions to it are many years off. However, new technological developments allow the use of knowledge organization and machine learning in multimedia search systems and services. Specifically, we argue that improved detection of some classes of low-level features in images, music and video can be used in conjunction with knowledge organization to tag or label multimedia content for better retrieval performance. We provide an overview of the use of knowledge organization schemes in machine learning and make recommendations to information professionals on the use of this technology with knowledge organization techniques to solve multimedia IR problems. We introduce a five-step process model that extracts features from multimedia objects (Step 1), drawing on both knowledge organization (Step 1a) and machine learning (Step 1b), merges them together (Step 2) and creates an index of those multimedia objects (Step 3). We also outline the further steps of creating an application to utilize the multimedia objects (Step 4) and of maintaining and updating the database of features on those objects (Step 5).
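    The five-step process model is the most concrete part of the abstract, so a skeleton may help fix the flow of data. This is only a sketch of the steps as described above; every function, parameter and type name is a hypothetical placeholder, not an API from the paper:

    # Skeleton of the five-step process model described in the abstract.
    # All names are hypothetical placeholders.

    def merge(ko_features, ml_features):
        # Step 2: combine both feature sets (a simple union is one possible policy).
        return set(ko_features) | set(ml_features)

    def build_index(objects, ko_scheme, ml_model, index):
        for obj in objects:
            ko_features = ko_scheme.tags_for(obj)       # Step 1a: knowledge organization
            ml_features = ml_model.detect(obj)          # Step 1b: machine learning
            features = merge(ko_features, ml_features)  # Step 2: merge both feature sets
            index.add(obj, features)                    # Step 3: index the multimedia object
        return index

    def search_application(index, query):
        # Step 4: an application that utilizes the indexed multimedia objects.
        return index.lookup(query)

    def maintain(index, changed_objects, ko_scheme, ml_model):
        # Step 5: keep the feature database current as objects are added or updated.
        return build_index(changed_objects, ko_scheme, ml_model, index)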
  4. Hauff-Hartig, S.: Wissensrepräsentation durch RDF: Drei angewandte Forschungsbeispiele : Bitte recht vielfältig: Wie Wissensgraphen, Disco und FaBiO Struktur in Mangas und die Humanities bringen [Knowledge representation through RDF: three applied research examples. Please keep it diverse: how knowledge graphs, Disco and FaBiO bring structure to manga and the humanities] (2021) 0.01
    0.009081715 = product of:
      0.027245143 = sum of:
        0.027245143 = product of:
          0.054490287 = sum of:
            0.054490287 = weight(_text_:22 in 318) [ClassicSimilarity], result of:
              0.054490287 = score(doc=318,freq=2.0), product of:
                0.17604718 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05027291 = queryNorm
                0.30952093 = fieldWeight in 318, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=318)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 5.2021 12:43:05
  5. Jia, J.: From data to knowledge : the relationships between vocabularies, linked data and knowledge graphs (2021) 0.01
    0.0056760716 = product of:
      0.017028214 = sum of:
        0.017028214 = product of:
          0.03405643 = sum of:
            0.03405643 = weight(_text_:22 in 106) [ClassicSimilarity], result of:
              0.03405643 = score(doc=106,freq=2.0), product of:
                0.17604718 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05027291 = queryNorm
                0.19345059 = fieldWeight in 106, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=106)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 1.2021 14:24:32
  6. Hocker, J.; Schindler, C.; Rittberger, M.: Participatory design for ontologies : a case study of an open science ontology for qualitative coding schemas (2020) 0.00
    0.0045408574 = product of:
      0.013622572 = sum of:
        0.013622572 = product of:
          0.027245143 = sum of:
            0.027245143 = weight(_text_:22 in 179) [ClassicSimilarity], result of:
              0.027245143 = score(doc=179,freq=2.0), product of:
                0.17604718 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05027291 = queryNorm
                0.15476047 = fieldWeight in 179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=179)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    20. 1.2015 18:30:22