Search (87 results, page 1 of 5)

  • theme_ss:"Semantisches Umfeld in Indexierung u. Retrieval"
  1. Bettencourt, N.; Silva, N.; Barroso, J.: Semantically enhancing recommender systems (2016) 0.10
    0.10332343 = product of:
      0.15498514 = sum of:
        0.12775399 = weight(_text_:resources in 3374) [ClassicSimilarity], result of:
          0.12775399 = score(doc=3374,freq=16.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.68443835 = fieldWeight in 3374, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.046875 = fieldNorm(doc=3374)
        0.027231153 = product of:
          0.054462306 = sum of:
            0.054462306 = weight(_text_:management in 3374) [ClassicSimilarity], result of:
              0.054462306 = score(doc=3374,freq=4.0), product of:
                0.17235184 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.051133685 = queryNorm
                0.31599492 = fieldWeight in 3374, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3374)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
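    The indented block above is Lucene's "explain" output for a classic tf-idf (ClassicSimilarity) score. As a minimal sketch, assuming the standard ClassicSimilarity formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)), and taking queryNorm, fieldNorm and the coord factors verbatim from the listing, the 0.10332343 score of this hit can be reproduced in Python:

      import math

      MAX_DOCS = 44218          # maxDocs reported in the explain tree above
      QUERY_NORM = 0.051133685  # queryNorm reported in the explain tree above

      def idf(doc_freq: int) -> float:
          return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

      def term_weight(freq: float, doc_freq: int, field_norm: float) -> float:
          """queryWeight * fieldWeight for one term, as in the explain output."""
          query_weight = idf(doc_freq) * QUERY_NORM
          field_weight = math.sqrt(freq) * idf(doc_freq) * field_norm
          return query_weight * field_weight

      resources  = term_weight(freq=16.0, doc_freq=3122, field_norm=0.046875)  # ~0.1278
      management = term_weight(freq=4.0,  doc_freq=4130, field_norm=0.046875)  # ~0.0545

      # "management" sits in a nested clause scaled by coord(1/2); the outer sum is
      # scaled by coord(2/3) because two of the three query clauses matched.
      score = (resources + management * 0.5) * (2.0 / 3.0)
      print(score)  # ~0.10332, matching the listed 0.10332343 up to rounding

    The same arithmetic, with different frequencies, field norms and coord factors, accounts for every score tree in this result list.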
    
    Abstract
    As the amount of content and the number of users in social relationships on the Internet continue to grow, resource sharing and access-policy management are difficult, time-consuming and error-prone. Cross-domain recommendation of private or protected resources, managed and secured by each domain's specific access rules, is impracticable due to private security policies and poor sharing mechanisms. This work focuses on exploiting resources' content, users' preferences, users' social networks and semantic information to cross-relate different resources through their meta-information, using recommendation techniques that combine collaborative filtering with semantic annotations to generate associations between resources. The semantic similarities established between resources are used in a hybrid recommendation engine that interprets user and resource semantic information. The recommendation engine allows the promotion and discovery of unknown-unknown resources to users who might not even know those resources exist, thus providing a means to solve the cross-domain recommendation of private or protected resources.
    Source
    Knowledge discovery, knowledge engineering and knowledge management: 7th International Joint Conference, IC3K 2015, Lisbon, Portugal, November 12-14, 2015, Revised Selected Papers. Eds.: A. Fred et al
  2. Faaborg, A.; Lagoze, C.: Semantic browsing (2003) 0.05
    0.05129568 = product of:
      0.07694352 = sum of:
        0.052695833 = weight(_text_:resources in 1026) [ClassicSimilarity], result of:
          0.052695833 = score(doc=1026,freq=2.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.28231642 = fieldWeight in 1026, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1026)
        0.02424768 = product of:
          0.04849536 = sum of:
            0.04849536 = weight(_text_:22 in 1026) [ClassicSimilarity], result of:
              0.04849536 = score(doc=1026,freq=2.0), product of:
                0.17906146 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051133685 = queryNorm
                0.2708308 = fieldWeight in 1026, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1026)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    We have created software applications that allow users to both author and use Semantic Web metadata. To create and use a layer of semantic content on top of the existing Web, we have (1) implemented a user interface that expedites the task of attributing metadata to resources on the Web, and (2) augmented a Web browser to leverage this semantic metadata to provide relevant information and tasks to the user. This project provides a framework for annotating and reorganizing existing files, pages, and sites on the Web that is similar to Vannevar Bush's original concepts of trail blazing and associative indexing.
    Source
    Research and advanced technology for digital libraries : 7th European Conference, proceedings / ECDL 2003, Trondheim, Norway, August 17-22, 2003
  3. Jun, W.: ¬A knowledge network constructed by integrating classification, thesaurus and metadata in a digital library (2003) 0.04
    0.040492512 = product of:
      0.060738765 = sum of:
        0.042584665 = weight(_text_:resources in 1254) [ClassicSimilarity], result of:
          0.042584665 = score(doc=1254,freq=4.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.22814612 = fieldWeight in 1254, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.03125 = fieldNorm(doc=1254)
        0.018154101 = product of:
          0.036308203 = sum of:
            0.036308203 = weight(_text_:management in 1254) [ClassicSimilarity], result of:
              0.036308203 = score(doc=1254,freq=4.0), product of:
                0.17235184 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.051133685 = queryNorm
                0.21066327 = fieldWeight in 1254, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1254)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Knowledge management in digital libraries is a universal problem. Keyword-based searching is applied everywhere no matter whether the resources are indexed databases or full-text Web pages. In keyword matching, the valuable content description and indexing of the metadata, such as the subject descriptors and the classification notations, are merely treated as common keywords to be matched with the user query. Without the support of vocabulary control tools, such as classification systems and thesauri, the intelligent labor of content analysis, description and indexing in metadata production is seriously wasted. New retrieval paradigms are needed to exploit the potential of the metadata resources. Could classification and thesauri, which contain the condensed intelligence of generations of librarians, be used in a digital library to organize the networked information, especially metadata, to facilitate their usability and change the digital library into a knowledge management environment? To examine that question, we designed and implemented a new paradigm that incorporates a classification system, a thesaurus and metadata. The classification and the thesaurus are merged into a concept network, and the metadata are distributed into the nodes of the concept network according to their subjects. The abstract concept node instantiated with the related metadata records becomes a knowledge node. A coherent and consistent knowledge network is thus formed. It is not only a framework for resource organization but also a structure for knowledge navigation, retrieval and learning. We have built an experimental system based on the Chinese Classification and Thesaurus, which is the most comprehensive and authoritative in China, and we have incorporated more than 5000 bibliographic records in the computing domain from the Peking University Library. The result is encouraging. In this article, we review the tools, the architecture and the implementation of our experimental system, which is called Vision.
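    The concept network described here is essentially a graph whose nodes carry a classification notation, a preferred thesaurus term, and the metadata records filed under that subject. The following sketch is a generic illustration of such a knowledge node, not the Vision system's actual implementation; the class, field names and example values are hypothetical:

      from dataclasses import dataclass, field

      @dataclass
      class KnowledgeNode:
          notation: str                                       # classification notation
          preferred_term: str                                 # thesaurus descriptor
          broader: list["KnowledgeNode"] = field(default_factory=list)
          related: list["KnowledgeNode"] = field(default_factory=list)
          records: list[dict] = field(default_factory=list)   # bibliographic metadata

          def attach(self, record: dict) -> None:
              """Distribute a metadata record to this node according to its subject."""
              self.records.append(record)

      computing = KnowledgeNode("TP3", "Computer science")
      databases = KnowledgeNode("TP311", "Database systems", broader=[computing])
      databases.attach({"title": "A knowledge network constructed by integrating ...",
                        "year": 2003})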
  4. Sacco, G.M.: Dynamic taxonomies and guided searches (2006) 0.04
    0.037837304 = product of:
      0.113511905 = sum of:
        0.113511905 = sum of:
          0.044929106 = weight(_text_:management in 5295) [ClassicSimilarity], result of:
            0.044929106 = score(doc=5295,freq=2.0), product of:
              0.17235184 = queryWeight, product of:
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.051133685 = queryNorm
              0.2606825 = fieldWeight in 5295, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5295)
          0.068582796 = weight(_text_:22 in 5295) [ClassicSimilarity], result of:
            0.068582796 = score(doc=5295,freq=4.0), product of:
              0.17906146 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051133685 = queryNorm
              0.38301262 = fieldWeight in 5295, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5295)
      0.33333334 = coord(1/3)
    
    Abstract
    A new search paradigm, in which the primary user activity is the guided exploration of a complex information space rather than the retrieval of items based on precise specifications, is proposed. The author claims that this paradigm is the norm in most practical applications, and that solutions based on traditional search methods are not effective in this context. He then presents a solution based on dynamic taxonomies, a knowledge management model that effectively guides users to reach their goal while giving them total freedom in exploring the information base. Applications, benefits, and current research are discussed.
    Date
    22. 7.2006 17:56:22
  5. Quiroga, L.M.; Mostafa, J.: ¬An experiment in building profiles in information filtering : the role of context of user relevance feedback (2002) 0.04
    0.035790663 = product of:
      0.053685993 = sum of:
        0.037639882 = weight(_text_:resources in 2579) [ClassicSimilarity], result of:
          0.037639882 = score(doc=2579,freq=2.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.20165458 = fieldWeight in 2579, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2579)
        0.016046109 = product of:
          0.032092217 = sum of:
            0.032092217 = weight(_text_:management in 2579) [ClassicSimilarity], result of:
              0.032092217 = score(doc=2579,freq=2.0), product of:
                0.17235184 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.051133685 = queryNorm
                0.18620178 = fieldWeight in 2579, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2579)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    An experiment was conducted to see how relevance feedback could be used to build and adjust profiles to improve the performance of filtering systems. Data was collected during the system interaction of 18 graduate students with SIFTER (Smart Information Filtering Technology for Electronic Resources), a filtering system that ranks incoming information based on users' profiles. The data set came from a collection of 6000 records concerning consumer health. In the first phase of the study, three different modes of profile acquisition were compared. The explicit mode allowed users to directly specify the profile; the implicit mode utilized relevance feedback to create and refine the profile; and the combined mode allowed users to initialize the profile and to continuously refine it using relevance feedback. Filtering performance, measured in terms of Normalized Precision, showed that the three approaches were significantly different (α = 0.05 and p = 0.012). The explicit mode of profile acquisition consistently produced superior results. Exclusive reliance on relevance feedback in the implicit mode resulted in inferior performance. The low performance obtained by the implicit acquisition mode motivated the second phase of the study, which aimed to clarify the role of context in relevance feedback judgments. An inductive content analysis of thinking aloud protocols showed dimensions that were highly situational, establishing the importance context plays in feedback relevance assessments. Results suggest the need for better representation of documents, profiles, and relevance feedback mechanisms that incorporate dimensions identified in this research.
    Source
    Information processing and management. 38(2002) no.5, S.671-694
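    As a generic illustration of the implicit acquisition mode described in the abstract above, a profile can be adjusted with a Rocchio-style relevance feedback update. This is a sketch of the general technique, not SIFTER's actual algorithm, and the weights and toy vectors are arbitrary:

      import numpy as np

      def update_profile(profile, relevant, nonrelevant,
                         alpha=1.0, beta=0.75, gamma=0.15):
          """Move the profile toward judged-relevant items and away from the rest."""
          rel = np.mean(relevant, axis=0) if relevant else np.zeros_like(profile)
          nonrel = np.mean(nonrelevant, axis=0) if nonrelevant else np.zeros_like(profile)
          return alpha * profile + beta * rel - gamma * nonrel

      # Toy term-weight vectors over a three-word vocabulary.
      profile = np.array([0.2, 0.0, 0.5])
      profile = update_profile(profile,
                               relevant=[np.array([0.6, 0.1, 0.4])],
                               nonrelevant=[np.array([0.0, 0.9, 0.0])])
      print(profile)  # the profile shifts toward the terms of the relevant document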
  6. Wolfram, D.; Xie, H.I.: Traditional IR for web users : a context for general audience digital libraries (2002) 0.04
    0.035790663 = product of:
      0.053685993 = sum of:
        0.037639882 = weight(_text_:resources in 2589) [ClassicSimilarity], result of:
          0.037639882 = score(doc=2589,freq=2.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.20165458 = fieldWeight in 2589, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2589)
        0.016046109 = product of:
          0.032092217 = sum of:
            0.032092217 = weight(_text_:management in 2589) [ClassicSimilarity], result of:
              0.032092217 = score(doc=2589,freq=2.0), product of:
                0.17235184 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.051133685 = queryNorm
                0.18620178 = fieldWeight in 2589, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2589)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The emergence of general audience digital libraries (GADLs) defines a context that represents a hybrid of both "traditional" IR, using primarily bibliographic resources provided by database vendors, and "popular" IR, exemplified by public search systems available on the World Wide Web. Findings of a study investigating end-user searching and response to a GADL are reported. Data collected from a Web-based end-user survey and data logs of resource usage for a Web-based GADL were analyzed for user characteristics, patterns of access and use, and user feedback. Cross-tabulations using respondent demographics revealed several key differences in how the system was used and valued by users of different age groups. Older users valued the service more than younger users and engaged in different searching and viewing behaviors. The GADL more closely resembles traditional retrieval systems in terms of content and purpose of use, but is more similar to popular IR systems in terms of user behavior and accessibility. A model that defines the dual context of the GADL environment is derived from the data analysis and existing IR models in general and other specific contexts. The authors demonstrate the distinguishing characteristics of this IR context, and discuss implications for the development and evaluation of future GADLs to accommodate a variety of user needs and expectations.
    Source
    Information processing and management. 38(2002) no.5, S.627-648
  7. Qu, R.; Fang, Y.; Bai, W.; Jiang, Y.: Computing semantic similarity based on novel models of semantic representation using Wikipedia (2018) 0.04
    0.035790663 = product of:
      0.053685993 = sum of:
        0.037639882 = weight(_text_:resources in 5052) [ClassicSimilarity], result of:
          0.037639882 = score(doc=5052,freq=2.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.20165458 = fieldWeight in 5052, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5052)
        0.016046109 = product of:
          0.032092217 = sum of:
            0.032092217 = weight(_text_:management in 5052) [ClassicSimilarity], result of:
              0.032092217 = score(doc=5052,freq=2.0), product of:
                0.17235184 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.051133685 = queryNorm
                0.18620178 = fieldWeight in 5052, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5052)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Computing Semantic Similarity (SS) between concepts is one of the most critical issues in many domains such as Natural Language Processing and Artificial Intelligence. Over the years, several SS measurement methods have been proposed by exploiting different knowledge resources. Wikipedia provides a large domain-independent encyclopedic repository and a semantic network for computing SS between concepts. Traditional feature-based measures rely on linear combinations of different properties with two main limitations: insufficient information and the loss of semantic information. In this paper, we propose several hybrid SS measurement approaches that use the Information Content (IC) and features of concepts, which avoid the limitations mentioned above. To integrate discrete properties into one component, we present two models of semantic representation, called CORM and CARM. Then, we compute SS based on these models and take the IC of categories as a supplement to SS measurement. The evaluation, based on several widely used benchmarks and a benchmark we developed ourselves, supports these intuitions with respect to human judgments. In summary, our approaches are more efficient in determining SS between concepts and correlate better with human judgments than previous methods such as Word2Vec and NASARI.
    Source
    Information processing and management. 54(2018) no.6, S.1002-1021
  8. Fernández-Reyes, F.C.; Hermosillo-Valadez, J.; Montes-y-Gómez, M.: ¬A prospect-guided global query expansion strategy using word embeddings (2018) 0.04
    0.035790663 = product of:
      0.053685993 = sum of:
        0.037639882 = weight(_text_:resources in 5090) [ClassicSimilarity], result of:
          0.037639882 = score(doc=5090,freq=2.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.20165458 = fieldWeight in 5090, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5090)
        0.016046109 = product of:
          0.032092217 = sum of:
            0.032092217 = weight(_text_:management in 5090) [ClassicSimilarity], result of:
              0.032092217 = score(doc=5090,freq=2.0), product of:
                0.17235184 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.051133685 = queryNorm
                0.18620178 = fieldWeight in 5090, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5090)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The effectiveness of query expansion methods depends essentially on identifying good candidates, or prospects, semantically related to query terms. Word embeddings have been used recently in an attempt to address this problem. Nevertheless, query disambiguation is still necessary: the semantic relatedness of each word in the corpus is modeled, but choosing the right terms for expansion from the standpoint of the un-modeled query semantics remains an open issue. In this paper we propose a novel query expansion method using word embeddings that models the global query semantics from the standpoint of prospect vocabulary terms. The proposed method makes it possible to explore query-vocabulary semantic closeness in such a way that new terms, semantically related to more relevant topics, are elicited and added as a function of the query as a whole. The method includes candidate pooling strategies that address disambiguation issues without using exogenous resources. We tested our method with three topic sets over CLEF corpora and compared it across different Information Retrieval models and against another expansion technique using word embeddings as well. Our experiments indicate that our method achieves significant results that outperform the baselines, improving both recall and precision metrics without relevance feedback.
    Source
    Information processing and management. 54(2018) no.1, S.1-13
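    As a generic sketch of the family of methods this paper belongs to, query expansion with word embeddings typically ranks candidate terms by their similarity to the query as a whole. The snippet below illustrates that idea with made-up toy vectors; it is not the authors' prospect-guided strategy:

      import numpy as np

      embeddings = {                      # hypothetical three-dimensional word vectors
          "semantic":  np.array([0.9, 0.1, 0.0]),
          "retrieval": np.array([0.2, 0.9, 0.1]),
          "ontology":  np.array([0.8, 0.2, 0.1]),
          "search":    np.array([0.3, 0.8, 0.2]),
          "banana":    np.array([0.0, 0.1, 0.9]),
      }

      def cosine(a, b):
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

      def expand(query_terms, k=2):
          """Rank candidate terms against the query centroid and append the top k."""
          centroid = np.mean([embeddings[t] for t in query_terms], axis=0)
          candidates = [w for w in embeddings if w not in query_terms]
          ranked = sorted(candidates, key=lambda w: cosine(embeddings[w], centroid),
                          reverse=True)
          return query_terms + ranked[:k]

      print(expand(["semantic", "retrieval"]))
      # e.g. ['semantic', 'retrieval', 'ontology', 'search']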
  9. Efthimiadis, E.N.: End-users' understanding of thesaural knowledge structures in interactive query expansion (1994) 0.04
    0.035590272 = product of:
      0.10677081 = sum of:
        0.10677081 = sum of:
          0.05134755 = weight(_text_:management in 5693) [ClassicSimilarity], result of:
            0.05134755 = score(doc=5693,freq=2.0), product of:
              0.17235184 = queryWeight, product of:
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.051133685 = queryNorm
              0.29792285 = fieldWeight in 5693, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.0625 = fieldNorm(doc=5693)
          0.055423267 = weight(_text_:22 in 5693) [ClassicSimilarity], result of:
            0.055423267 = score(doc=5693,freq=2.0), product of:
              0.17906146 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051133685 = queryNorm
              0.30952093 = fieldWeight in 5693, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=5693)
      0.33333334 = coord(1/3)
    
    Date
    30. 3.2001 13:35:22
    Source
    Knowledge organization and quality management: Proc. of the 3rd International ISKO Conference, 20-24 June 1994, Copenhagen, Denmark. Ed.: H. Albrechtsen et al
  10. Hazrina, S.; Sharef, N.M.; Ibrahim, H.; Murad, M.A.A.; Noah, S.A.M.: Review on the advancements of disambiguation in semantic question answering system (2017) 0.03
    0.02863253 = product of:
      0.042948794 = sum of:
        0.030111905 = weight(_text_:resources in 3292) [ClassicSimilarity], result of:
          0.030111905 = score(doc=3292,freq=2.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.16132367 = fieldWeight in 3292, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.03125 = fieldNorm(doc=3292)
        0.0128368875 = product of:
          0.025673775 = sum of:
            0.025673775 = weight(_text_:management in 3292) [ClassicSimilarity], result of:
              0.025673775 = score(doc=3292,freq=2.0), product of:
                0.17235184 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.051133685 = queryNorm
                0.14896142 = fieldWeight in 3292, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3292)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Ambiguity is a potential problem in any semantic question answering (SQA) system due to the idiosyncratic nature of composing natural language (NL) questions and semantic resources. Thus, disambiguation of SQA systems is a field of ongoing research. Ambiguity occurs in SQA because a word or a sentence can have more than one meaning, or multiple words in the same language can share the same meaning. Therefore, an SQA system needs disambiguation solutions to select the correct meaning when the linguistic triples match multiple KB concepts, and to enumerate similar words, especially when linguistic triples do not match any KB concept. The latest development in this field is a solution for SQA systems that is able to process a complex NL question while accessing open-domain data from linked open data (LOD). The contributions in this paper include (1) formulating an SQA conceptual framework based on an in-depth study of existing SQA processes; (2) identifying the ambiguity types, specifically in English, based on an interdisciplinary literature review; (3) highlighting the ambiguity types that had been resolved by previous SQA studies; and (4) analysing the results of the existing SQA disambiguation solutions, the complexity of NL question processing, and the complexity of data retrieval from KB(s) or LOD. The results of this review demonstrated that out of thirteen types of ambiguity identified in the literature, only six types had been successfully resolved by the previous studies. Efforts to improve the disambiguation are in progress for the remaining unresolved ambiguity types to improve the accuracy of the answers formulated by the SQA system. The remaining ambiguity types are potentially resolved in the identified SQA process based on ambiguity scenarios elaborated in this paper. The results of this review also demonstrated that most existing research on SQA systems has treated the processing of NL question complexity separately from the processing of KB structure complexity.
    Source
    Information processing and management. 53(2017) no.1, S.52-69
  11. Efthimiadis, E.N.: User choices : a new yardstick for the evaluation of ranking algorithms for interactive query expansion (1995) 0.02
    0.022243923 = product of:
      0.066731766 = sum of:
        0.066731766 = sum of:
          0.032092217 = weight(_text_:management in 5697) [ClassicSimilarity], result of:
            0.032092217 = score(doc=5697,freq=2.0), product of:
              0.17235184 = queryWeight, product of:
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.051133685 = queryNorm
              0.18620178 = fieldWeight in 5697, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5697)
          0.034639545 = weight(_text_:22 in 5697) [ClassicSimilarity], result of:
            0.034639545 = score(doc=5697,freq=2.0), product of:
              0.17906146 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051133685 = queryNorm
              0.19345059 = fieldWeight in 5697, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5697)
      0.33333334 = coord(1/3)
    
    Date
    22. 2.1996 13:14:10
    Source
    Information processing and management. 31(1995) no.4, S.605-620
  12. Brunetti, J.M.; Roberto García, R.: User-centered design and evaluation of overview components for semantic data exploration (2014) 0.02
    0.017795136 = product of:
      0.053385407 = sum of:
        0.053385407 = sum of:
          0.025673775 = weight(_text_:management in 1626) [ClassicSimilarity], result of:
            0.025673775 = score(doc=1626,freq=2.0), product of:
              0.17235184 = queryWeight, product of:
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.051133685 = queryNorm
              0.14896142 = fieldWeight in 1626, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.03125 = fieldNorm(doc=1626)
          0.027711634 = weight(_text_:22 in 1626) [ClassicSimilarity], result of:
            0.027711634 = score(doc=1626,freq=2.0), product of:
              0.17906146 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051133685 = queryNorm
              0.15476047 = fieldWeight in 1626, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1626)
      0.33333334 = coord(1/3)
    
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 66(2014) no.5, S.519-536
  13. Baofu, P.: ¬The future of information architecture : conceiving a better way to understand taxonomy, network, and intelligence (2008) 0.02
    0.017743612 = product of:
      0.053230833 = sum of:
        0.053230833 = weight(_text_:resources in 2257) [ClassicSimilarity], result of:
          0.053230833 = score(doc=2257,freq=4.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.28518265 = fieldWeight in 2257, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2257)
      0.33333334 = coord(1/3)
    
    LCSH
    Information resources
    Subject
    Information resources
  14. Koopman, B.; Zuccon, G.; Bruza, P.; Sitbon, L.; Lawley, M.: Information retrieval as semantic inference : a graph Inference model applied to medical search (2016) 0.02
    0.017385118 = product of:
      0.05215535 = sum of:
        0.05215535 = weight(_text_:resources in 3260) [ClassicSimilarity], result of:
          0.05215535 = score(doc=3260,freq=6.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.2794208 = fieldWeight in 3260, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.03125 = fieldNorm(doc=3260)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper presents a Graph Inference retrieval model that integrates structured knowledge resources, statistical information retrieval methods and inference in a unified framework. Key components of the model are a graph-based representation of the corpus and retrieval driven by an inference mechanism achieved as a traversal over the graph. The model is proposed to tackle the semantic gap problem: the mismatch between the raw data and the way a human being interprets it. We break down the semantic gap problem into five core issues, each requiring a specific type of inference in order to be overcome. Our model and evaluation are applied to the medical domain because search within this domain is particularly challenging and, as we show, often requires inference. In addition, this domain features both structured knowledge resources and unstructured text. Our evaluation shows that inference can be effective, retrieving many new relevant documents that are not retrieved by state-of-the-art information retrieval models. We show that many retrieved documents were not pooled by keyword-based search methods, prompting us to perform additional relevance assessment on these new documents. A third of the newly retrieved documents judged were found to be relevant. Our analysis provides a thorough understanding of when and how to apply inference for retrieval, including a categorisation of queries according to the effect of inference. The inference mechanism promoted recall by retrieving new relevant documents not found by previous keyword-based approaches. In addition, it promoted precision by an effective reranking of documents. When inference is used, performance gains can generally be expected on hard queries. However, inference should not be applied universally: for easy, unambiguous queries and queries with few relevant documents, inference did adversely affect effectiveness. These conclusions reflect the fact that for retrieval as inference to be effective, a careful balancing act is involved. Finally, although the Graph Inference model is developed and applied to medical search, it is a general retrieval model applicable to other areas such as web search, where an emerging research trend is to utilise structured knowledge resources for more effective semantic search.
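    As a rough illustration of retrieval driven by traversal over a concept graph, in the spirit of (but not identical to) the Graph Inference model described above, documents reached through related concepts can be scored with a weight that decays per hop. The graph contents and decay factor below are made up:

      from collections import defaultdict

      concept_edges = {                 # concept -> semantically related concepts
          "myocardial infarction": ["heart attack", "troponin"],
          "heart attack": ["chest pain"],
      }
      doc_index = {                     # concept -> documents mentioning it
          "heart attack": {"doc1"},
          "chest pain": {"doc2"},
          "troponin": {"doc3"},
      }

      def score_by_traversal(query_concept, decay=0.5, max_hops=2):
          """Spread the query's weight outward; farther concepts contribute less."""
          scores = defaultdict(float)
          frontier = [(query_concept, 1.0)]
          for _ in range(max_hops + 1):
              next_frontier = []
              for concept, weight in frontier:
                  for doc in doc_index.get(concept, ()):
                      scores[doc] += weight
                  for neighbour in concept_edges.get(concept, ()):
                      next_frontier.append((neighbour, weight * decay))
              frontier = next_frontier    # toy graph is acyclic, so no visited set
          return dict(scores)

      print(score_by_traversal("myocardial infarction"))
      # {'doc1': 0.5, 'doc3': 0.5, 'doc2': 0.25}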
  15. Boyack, K.W.; Wylie, B.N.; Davidson, G.S.: Information Visualization, Human-Computer Interaction, and Cognitive Psychology : Domain Visualizations (2002) 0.02
    0.016329238 = product of:
      0.048987713 = sum of:
        0.048987713 = product of:
          0.097975425 = sum of:
            0.097975425 = weight(_text_:22 in 1352) [ClassicSimilarity], result of:
              0.097975425 = score(doc=1352,freq=4.0), product of:
                0.17906146 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051133685 = queryNorm
                0.54716086 = fieldWeight in 1352, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1352)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 2.2003 17:25:39
    22. 2.2003 18:17:40
  16. Smeaton, A.F.; Rijsbergen, C.J. van: ¬The retrieval effects of query expansion on a feedback document retrieval system (1983) 0.02
    0.01616512 = product of:
      0.04849536 = sum of:
        0.04849536 = product of:
          0.09699072 = sum of:
            0.09699072 = weight(_text_:22 in 2134) [ClassicSimilarity], result of:
              0.09699072 = score(doc=2134,freq=2.0), product of:
                0.17906146 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051133685 = queryNorm
                0.5416616 = fieldWeight in 2134, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2134)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    30. 3.2001 13:32:22
  17. Thenmalar, S.; Geetha, T.V.: Enhanced ontology-based indexing and searching (2014) 0.02
    0.015570745 = product of:
      0.046712235 = sum of:
        0.046712235 = sum of:
          0.022464553 = weight(_text_:management in 1633) [ClassicSimilarity], result of:
            0.022464553 = score(doc=1633,freq=2.0), product of:
              0.17235184 = queryWeight, product of:
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.051133685 = queryNorm
              0.13034125 = fieldWeight in 1633, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1633)
          0.02424768 = weight(_text_:22 in 1633) [ClassicSimilarity], result of:
            0.02424768 = score(doc=1633,freq=2.0), product of:
              0.17906146 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051133685 = queryNorm
              0.1354154 = fieldWeight in 1633, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1633)
      0.33333334 = coord(1/3)
    
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 66(2014) no.6, S.678-696
  18. Graham, R.Y.: Subject no-hits in an academic library online catalog : an exploration of two potential ameliorations (2004) 0.02
    0.015055953 = product of:
      0.045167856 = sum of:
        0.045167856 = weight(_text_:resources in 178) [ClassicSimilarity], result of:
          0.045167856 = score(doc=178,freq=2.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.2419855 = fieldWeight in 178, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.046875 = fieldNorm(doc=178)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper describes a study that explored ways in which users' subject-searching problems in a local online catalog might be reduced. On a weekly basis, the author reviewed catalog transaction logs to identify topics of subject searches retrieving no records for which appropriate information resources may actually be represented in the catalog. For topics thus identified, the author explored two potential ameliorations of the no-hits search results through the use of authority record cross-references and pathfinder records providing brief instructions on search refinement. This paper describes the study findings, discusses possible concerns regarding the amelioration methods used, outlines additional steps needed to determine whether the potential ameliorations make a difference to users' searching experiences, and suggests related areas for further research.
  19. Mäkelä, E.; Hyvönen, E.; Saarela, S.; Vilfanen, K.: Application of ontology techniques to view-based semantic search and browsing (2012) 0.02
    0.015055953 = product of:
      0.045167856 = sum of:
        0.045167856 = weight(_text_:resources in 3264) [ClassicSimilarity], result of:
          0.045167856 = score(doc=3264,freq=2.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.2419855 = fieldWeight in 3264, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.046875 = fieldNorm(doc=3264)
      0.33333334 = coord(1/3)
    
    Abstract
    We show how the benefits of the view-based search method, developed within the information retrieval community, can be extended with ontology-based search, developed within the Semantic Web community, and with semantic recommendations. As a proof of concept, we have implemented an ontology- and view-based search engine and recommendation system, Ontogator, for RDF(S) repositories. Ontogator is innovative in two ways. Firstly, the RDFS-based ontologies used for annotating metadata are also used in the user interface to facilitate view-based information retrieval. The views provide the user with an overview of the repository's contents and a vocabulary for expressing search queries. Secondly, a semantic browsing function is provided by a recommender system. This system enriches instance-level metadata by ontologies and provides the user with links to semantically related, relevant resources. The semantic linkage is specified in terms of logical rules. To illustrate and discuss the ideas, a deployed application of Ontogator to a photo repository of the Helsinki University Museum is presented.
  20. Hancock-Beaulieu, M.: Interactive query expansion in an OPAC : interface and retrieval issues (1995) 0.01
    0.0128368875 = product of:
      0.03851066 = sum of:
        0.03851066 = product of:
          0.07702132 = sum of:
            0.07702132 = weight(_text_:management in 5089) [ClassicSimilarity], result of:
              0.07702132 = score(doc=5089,freq=2.0), product of:
                0.17235184 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.051133685 = queryNorm
                0.44688427 = fieldWeight in 5089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5089)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Journal of document and text management. 3(1995) no.2, S.172-185

Languages

  • e 81
  • d 6

Types

  • a 82
  • el 5
  • m 2
  • r 1