Search (86 results, page 1 of 5)

  • theme_ss:"Semantisches Umfeld in Indexierung u. Retrieval"
  1. Kruschwitz, U.; Al-Bakour, H.: Users want more sophisticated search assistants : results of a task-based evaluation (2005) 0.03
    0.02508953 = product of:
      0.10035812 = sum of:
        0.10035812 = weight(_text_:markup in 4575) [ClassicSimilarity], result of:
          0.10035812 = score(doc=4575,freq=2.0), product of:
            0.27638784 = queryWeight, product of:
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.042049456 = queryNorm
            0.36310613 = fieldWeight in 4575, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4575)
      0.25 = coord(1/4)
    
    Abstract
    The Web provides a massive knowledge source, as do intranets and other electronic document collections. However, much of that knowledge is encoded implicitly and cannot be applied directly without processing into some more appropriate structures. Searching, browsing, question answering, for example, could all benefit from domain-specific knowledge contained in the documents, and in applications such as simple search we do not actually need very "deep" knowledge structures such as ontologies, but we can get a long way with a model of the domain that consists of term hierarchies. We combine domain knowledge automatically acquired by exploiting the documents' markup structure with knowledge extracted on the fly to assist a user with ad hoc search requests. Such a search system can suggest query modification options derived from the actual data and thus guide a user through the space of documents. This article gives a detailed account of a task-based evaluation that compares a search system that uses the outlined domain knowledge with a standard search system. We found that users do use the query modification suggestions proposed by the system. The main conclusion we can draw from this evaluation, however, is that users prefer a system that can suggest query modifications over a standard search engine, which simply presents a ranked list of documents. Most interestingly, we observe this user preference despite the fact that the baseline system even performs slightly better under certain criteria.
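    The relevance figures above are Lucene "explain" breakdowns (ClassicSimilarity, i.e. tf-idf with coordination and normalization factors). As a check on how the parts combine, here is a minimal Python sketch that recomputes the top score from the constants shown in the breakdown; the variable names are ours, not Lucene API calls.

      import math

      # Constants from the breakdown for result 1 (term "markup", doc 4575).
      idf = 1 + math.log(44218 / (167 + 1))   # idf(docFreq=167, maxDocs=44218) -> 6.572923
      query_norm = 0.042049456                # queryNorm, fixed for the whole query
      tf = math.sqrt(2.0)                     # tf(freq=2.0) = sqrt(freq) -> 1.4142135
      field_norm = 0.0390625                  # fieldNorm, length normalization for this field
      coord = 1 / 4                           # coord(1/4): 1 of 4 query clauses matched

      query_weight = idf * query_norm         # -> 0.27638784
      field_weight = tf * idf * field_norm    # -> 0.36310613
      print(coord * query_weight * field_weight)  # -> 0.02508953

    Every entry below follows the same pattern; the scores differ only through idf, tf, fieldNorm, and the coord fraction.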
  2. Lund, K.; Burgess, C.; Atchley, R.A.: Semantic and associative priming in high-dimensional semantic space (1995) 0.01
    0.014989476 = product of:
      0.059957903 = sum of:
        0.059957903 = product of:
          0.08993685 = sum of:
            0.05005701 = weight(_text_:language in 2151) [ClassicSimilarity], result of:
              0.05005701 = score(doc=2151,freq=2.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.30342668 = fieldWeight in 2151, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2151)
            0.039879844 = weight(_text_:22 in 2151) [ClassicSimilarity], result of:
              0.039879844 = score(doc=2151,freq=2.0), product of:
                0.14725003 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042049456 = queryNorm
                0.2708308 = fieldWeight in 2151, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2151)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Object
    Hyperspace Analogue to Language
    Source
    Proceedings of the Seventeenth Annual Conference of the Cognitive Science Society: July 22 - 25, 1995, University of Pittsburgh / ed. by Johanna D. Moore and Jill Fain Lehman
  3. Järvelin, K.; Kristensen, J.; Niemi, T.; Sormunen, E.; Keskustalo, H.: ¬A deductive data model for query expansion (1996) 0.01
    0.012848122 = product of:
      0.05139249 = sum of:
        0.05139249 = product of:
          0.07708873 = sum of:
            0.042906005 = weight(_text_:language in 2230) [ClassicSimilarity], result of:
              0.042906005 = score(doc=2230,freq=2.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.26008 = fieldWeight in 2230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2230)
            0.034182724 = weight(_text_:22 in 2230) [ClassicSimilarity], result of:
              0.034182724 = score(doc=2230,freq=2.0), product of:
                0.14725003 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042049456 = queryNorm
                0.23214069 = fieldWeight in 2230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2230)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    We present a deductive data model for concept-based query expansion. It is based on three abstraction levels: the conceptual, linguistic, and occurrence levels. Concepts and relationships among them are represented at the conceptual level. The linguistic level represents natural language expressions for concepts. Each expression has one or more matching models at the occurrence level. Each model specifies the matching of the expression in database indices built in varying ways. The data model supports a concept-based query expansion and formulation tool, the ExpansionTool, for environments providing heterogeneous IR systems. Expansion is controlled by adjustable matching reliability.
    Source
    Proceedings of the 19th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (ACM SIGIR '96), Zürich, Switzerland, August 18-22, 1996. Eds.: H.P. Frei et al
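    To make the three abstraction levels concrete, here is a minimal Python sketch of such a layered model; the class names, fields, and the expand() helper are illustrative assumptions, not the authors' ExpansionTool schema.

      from dataclasses import dataclass, field

      @dataclass
      class MatchingModel:              # occurrence level
          index_type: str               # e.g. a title index vs. a full-text index
          reliability: float            # adjustable matching reliability

      @dataclass
      class Expression:                 # linguistic level
          text: str                     # a natural language expression for a concept
          models: list[MatchingModel] = field(default_factory=list)

      @dataclass
      class Concept:                    # conceptual level
          name: str
          expressions: list[Expression] = field(default_factory=list)
          related: list["Concept"] = field(default_factory=list)

      def expand(concept: Concept, min_reliability: float) -> list[str]:
          """Collect expressions for a concept and its related concepts,
          keeping only those with a sufficiently reliable matching model."""
          terms = []
          for c in [concept, *concept.related]:
              for e in c.expressions:
                  if any(m.reliability >= min_reliability for m in e.models):
                      terms.append(e.text)
          return terms

    Keeping reliability at the occurrence level is what lets expansion be "controlled by adjustable matching reliability": one run can be strict, another recall-oriented, without touching the conceptual level.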
  4. Adhikari, A.; Dutta, B.; Dutta, A.; Mondal, D.; Singh, S.: ¬An intrinsic information content-based semantic similarity measure considering the disjoint common subsumers of concepts of an ontology (2018) 0.01
    0.010749864 = product of:
      0.042999458 = sum of:
        0.042999458 = product of:
          0.064499184 = sum of:
            0.03575501 = weight(_text_:language in 4372) [ClassicSimilarity], result of:
              0.03575501 = score(doc=4372,freq=2.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.21673335 = fieldWeight in 4372, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4372)
            0.028744178 = weight(_text_:29 in 4372) [ClassicSimilarity], result of:
              0.028744178 = score(doc=4372,freq=2.0), product of:
                0.14791684 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042049456 = queryNorm
                0.19432661 = fieldWeight in 4372, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4372)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    Finding similarity between concepts based on semantics has become a new trend in many applications (e.g., biomedical informatics, natural language processing). Measuring Semantic Similarity (SS) with high accuracy is a challenging task. In this context, the Information Content (IC)-based SS measure has gained popularity over the others. The notion of IC derives from information theory, which has high potential to characterize the semantics of concepts. Designing an IC-based SS framework comprises (i) an IC calculator and (ii) an SS calculator. In this article, we propose a generic intrinsic IC-based SS calculator. We also introduce a new structural aspect of an ontology called DCS (Disjoint Common Subsumers) that plays a significant role in deciding the similarity between two concepts. We evaluated our proposed similarity calculator against the existing intrinsic IC-based similarity calculators, as well as corpora-dependent similarity calculators, using several benchmark data sets. The experimental results show that the proposed similarity calculator produces a higher correlation with human evaluation than the existing state-of-the-art IC-based similarity calculators.
    Date
    29. 7.2018 11:40:33
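    The intrinsic IC idea the paper builds on can be sketched briefly: IC is computed from the ontology's structure alone, with no corpus frequencies. Below is a minimal Python sketch using a common intrinsic formulation (information rises as a concept's hyponym count falls) plus a Resnik-style similarity on top; it illustrates the family of measures, not the paper's DCS-based calculator.

      import math

      def intrinsic_ic(num_hyponyms: int, max_nodes: int) -> float:
          # Leaves are maximally informative; the root, which subsumes
          # everything, carries zero information.
          return 1.0 - math.log(num_hyponyms + 1) / math.log(max_nodes)

      def resnik_similarity(subsumer_hyponyms: int, max_nodes: int) -> float:
          # Resnik-style: similarity of two concepts is the IC of their
          # most informative common subsumer.
          return intrinsic_ic(subsumer_hyponyms, max_nodes)

      MAX_NODES = 10_000
      print(intrinsic_ic(0, MAX_NODES))          # leaf concept -> 1.0
      print(resnik_similarity(999, MAX_NODES))   # broad subsumer -> 0.25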
  5. Neumann, M.: HAL: Hyperspace Analogue to Language (2012) 0.01
    0.010113044 = product of:
      0.040452175 = sum of:
        0.040452175 = product of:
          0.12135652 = sum of:
            0.12135652 = weight(_text_:language in 966) [ClassicSimilarity], result of:
              0.12135652 = score(doc=966,freq=4.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.7356174 = fieldWeight in 966, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.09375 = fieldNorm(doc=966)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Object
    Hyperspace Analogue to Language
  6. Oard, D.W.: Alternative approaches for cross-language text retrieval (1997) 0.01
    0.009327574 = product of:
      0.037310295 = sum of:
        0.037310295 = product of:
          0.111930884 = sum of:
            0.111930884 = weight(_text_:language in 1164) [ClassicSimilarity], result of:
              0.111930884 = score(doc=1164,freq=40.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.6784828 = fieldWeight in 1164, product of:
                  6.3245554 = tf(freq=40.0), with freq of:
                    40.0 = termFreq=40.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1164)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    The explosive growth of the Internet and other sources of networked information has made automatic mediation of access to networked information sources an increasingly important problem. Much of this information is expressed as electronic text, and it is becoming practical to automatically convert some printed documents and recorded speech to electronic text as well. Thus, automated systems capable of detecting useful documents are finding widespread application. With even a small number of languages it can be inconvenient to issue the same query repeatedly in every language, so users who are able to read more than one language will likely prefer a multilingual text retrieval system over a collection of monolingual systems. And since reading ability in a language does not always imply fluent writing ability in that language, such users will likely find cross-language text retrieval particularly useful for languages in which they are less confident of their ability to express their information needs effectively. The use of such systems can also be beneficial if the user is able to read only a single language. For example, when only a small portion of the document collection will ever be examined by the user, performing retrieval before translation can be significantly more economical than performing translation before retrieval. So when the application is sufficiently important to justify the time and effort required for translation, those costs can be minimized if an effective cross-language text retrieval system is available. Even when translation is not available, there are circumstances in which cross-language text retrieval could be useful to a monolingual user. For example, a researcher might find a paper published in an unfamiliar language useful if that paper contains references to works by the same author that are in the researcher's native language.
    Multilingual text retrieval can be defined as selection of useful documents from collections that may contain several languages (English, French, Chinese, etc.). This formulation allows for the possibility that individual documents might contain more than one language, a common occurrence in some applications. Both cross-language and within-language retrieval are included in this formulation, but it is the cross-language aspect of the problem which distinguishes multilingual text retrieval from its well-studied monolingual counterpart. At the SIGIR 96 workshop on "Cross-Linguistic Information Retrieval" the participants discussed the proliferation of terminology being used to describe the field and settled on "Cross-Language" as the best single description of the salient aspect of the problem. "Multilingual" was felt to be too broad, since that term has also been used to describe systems able to perform within-language retrieval in more than one language but that lack any cross-language capability. "Cross-lingual" and "cross-linguistic" were felt to be equally good descriptions of the field, but "cross-language" was selected as the preferred term in the interest of standardization. Unfortunately, at about the same time the U.S. Defense Advanced Research Projects Agency (DARPA) introduced "translingual" as their preferred term, so we are still some distance from reaching consensus on this matter.
    I will not attempt to draw a sharp distinction between retrieval and filtering in this survey. Although my own work on adaptive cross-language text filtering has led me to make this distinction fairly carefully in other presentations (cf. Oard 1997b), such an approach does little to help understand the fundamental techniques which have been applied or the results that have been obtained in this case. Since it is still common to view filtering (detection of useful documents in dynamic document streams) as a kind of retrieval, I will simply adopt that perspective here.
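    Among the alternatives the survey covers, dictionary-based query translation is the simplest to illustrate. A minimal Python sketch, with a toy bilingual dictionary standing in for a real lexical resource:

      # English -> French toy dictionary; illustrative data only.
      BILINGUAL_DICT = {
          "retrieval": ["recherche", "extraction"],
          "language": ["langue", "langage"],
      }

      def translate_query(terms: list[str]) -> list[str]:
          """Replace each source term with all of its dictionary translations;
          unknown terms pass through untranslated."""
          translated = []
          for t in terms:
              translated.extend(BILINGUAL_DICT.get(t, [t]))
          return translated

      print(translate_query(["language", "retrieval"]))
      # ['langue', 'langage', 'recherche', 'extraction']

    Each source term fans out into all of its translations; resolving that ambiguity is precisely where the surveyed approaches diverge.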
  7. Brezillon, P.; Saker, I.: Modeling context in information seeking (1999) 0.01
    0.008599891 = product of:
      0.034399565 = sum of:
        0.034399565 = product of:
          0.051599346 = sum of:
            0.028604005 = weight(_text_:language in 276) [ClassicSimilarity], result of:
              0.028604005 = score(doc=276,freq=2.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.17338668 = fieldWeight in 276, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.03125 = fieldNorm(doc=276)
            0.022995342 = weight(_text_:29 in 276) [ClassicSimilarity], result of:
              0.022995342 = score(doc=276,freq=2.0), product of:
                0.14791684 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042049456 = queryNorm
                0.15546128 = fieldWeight in 276, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=276)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    Context plays an important role in a number of domains where reasoning intervenes, as in understanding, interpretation, diagnosis, etc. The reason is that reasoning activities heavily rely on a background (or experience) that is generally not made explicit and that gives a contextual dimension to knowledge. On the Web in December 1996, AltaVista returned more than 710,000 pages containing the word context, while concept gave only 639,000 references. A clear definition of this word remains to be found. There are several formal definitions of this concept (references are given in Brézillon, 1996): a set of preferences and/or beliefs, an infinite and only partially known collection of assumptions, a list of attributes, the product of an interpretation, possible worlds, assumptions under which a statement is true or false. One faces the same situation at the programming level: a collection of context schemas; a path in information retrieval; slots in object-oriented languages; a special, buffer-like data structure; a window on the screen; buttons which are functional, customisable and shareable; an interpreter which controls the system's activity; the characteristics of the situation and the goals of the knowledge use; or entities (things or events) related in a certain way that makes it possible to attend to what is said and what is not said. Context is often assimilated to a set of restrictions (e.g., preconditions) that limit access to parts of the applications. The first works considering context explicitly are in Natural Language Processing. Researchers in this domain focus on the linguistic context, sometimes associated with other types of contexts such as: semantic context, cognitive context, physical and perceptual context, and social context (Bunt, 1997).
    Date
    21. 3.2002 19:29:27
  8. Niemi, T.; Jämsen, J.: ¬A query language for discovering semantic associations, part II : sample queries and query evaluation (2007) 0.01
    0.00729846 = product of:
      0.02919384 = sum of:
        0.02919384 = product of:
          0.08758152 = sum of:
            0.08758152 = weight(_text_:language in 580) [ClassicSimilarity], result of:
              0.08758152 = score(doc=580,freq=12.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.5308861 = fieldWeight in 580, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=580)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    In our query language introduced in Part I (Journal of the American Society for Information Science and Technology. 58(2007) no.11, S.1559-1568) the user can formulate queries to find out (possibly complex) semantic relationships among entities. In this article we demonstrate the usage of our query language and discuss the new applications that it supports. We categorize several query types and give sample queries. The query types are categorized based on whether the entities specified in a query are known or unknown to the user in advance, and whether text information in documents is utilized. Natural language is used to represent the results of queries in order to facilitate correct interpretation by the user. We discuss briefly the issues related to the prototype implementation of the query language and show that an independent operation like Rho (Sheth et al., 2005; Anyanwu & Sheth, 2002, 2003), which presupposes entities of interest to be known in advance, is exceedingly inefficient in emulating the behavior of our query language. The discussion also covers potential problems, and challenges for future work.
  9. Järvelin, K.; Niemi, T.: Deductive information retrieval based on classifications (1993) 0.01
    0.007151001 = product of:
      0.028604005 = sum of:
        0.028604005 = product of:
          0.08581201 = sum of:
            0.08581201 = weight(_text_:language in 2229) [ClassicSimilarity], result of:
              0.08581201 = score(doc=2229,freq=8.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.52016 = fieldWeight in 2229, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2229)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Modern fact databases contain abundant data classified through several classifications. Typically, users must consult these classifications in separate manuals or files, which makes their effective use difficult. Contemporary database systems provide little support for the deductive use of classifications. In this study we show how deductive data management techniques can be applied to the utilization of data value classifications. Computation of transitive class relationships is of primary importance here. We define a representation of classifications which supports transitive computation and present an operation-oriented deductive query language tailored for classification-based deductive information retrieval. The operations of this language are on the same abstraction level as relational algebra operations and can be integrated with these to form a powerful and flexible query language for deductive information retrieval. We define the integration of these operations and demonstrate the usefulness of the language in terms of several sample queries.
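    The "computation of transitive class relationships" stressed above is at bottom a reachability computation over the classification hierarchy. A minimal Python sketch, assuming single parents and an acyclic hierarchy (the toy data is ours):

      def transitive_superclasses(parent_of: dict[str, str]) -> dict[str, set[str]]:
          """Map each class to the set of all of its transitive superclasses."""
          ancestors: dict[str, set[str]] = {}
          for cls in parent_of:
              chain, cur = set(), cls
              while cur in parent_of:       # walk up until a root is reached
                  cur = parent_of[cur]
                  chain.add(cur)
              ancestors[cls] = chain
          return ancestors

      hierarchy = {"poodle": "dog", "dog": "mammal", "mammal": "animal"}
      print(transitive_superclasses(hierarchy)["poodle"])
      # {'dog', 'mammal', 'animal'}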
  10. Boyack, K.W.; Wylie, B.N.; Davidson, G.S.: Information Visualization, Human-Computer Interaction, and Cognitive Psychology : Domain Visualizations (2002) 0.01
    0.0067141214 = product of:
      0.026856486 = sum of:
        0.026856486 = product of:
          0.08056945 = sum of:
            0.08056945 = weight(_text_:22 in 1352) [ClassicSimilarity], result of:
              0.08056945 = score(doc=1352,freq=4.0), product of:
                0.14725003 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042049456 = queryNorm
                0.54716086 = fieldWeight in 1352, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1352)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Date
    22. 2.2003 17:25:39
    22. 2.2003 18:17:40
  11. Smeaton, A.F.; Rijsbergen, C.J. van: ¬The retrieval effects of query expansion on a feedback document retrieval system (1983) 0.01
    0.0066466406 = product of:
      0.026586562 = sum of:
        0.026586562 = product of:
          0.07975969 = sum of:
            0.07975969 = weight(_text_:22 in 2134) [ClassicSimilarity], result of:
              0.07975969 = score(doc=2134,freq=2.0), product of:
                0.14725003 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042049456 = queryNorm
                0.5416616 = fieldWeight in 2134, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2134)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Date
    30. 3.2001 13:32:22
  12. Lund, K.; Burgess, C.: Producing high-dimensional semantic spaces from lexical co-occurrence (1996) 0.01
    0.00619295 = product of:
      0.0247718 = sum of:
        0.0247718 = product of:
          0.0743154 = sum of:
            0.0743154 = weight(_text_:language in 1704) [ClassicSimilarity], result of:
              0.0743154 = score(doc=1704,freq=6.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.45047188 = fieldWeight in 1704, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1704)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    A procedure is presented that processes a corpus of text and produces, for each word, a numeric vector containing information about its meaning. This procedure is applied to a large corpus of natural language text taken from Usenet, and the resulting vectors are examined to determine what information is contained within them. These vectors provide the coordinates in a high-dimensional space in which word relationships can be analyzed. Analyses of both vector similarity and multidimensional scaling demonstrate that there is significant semantic information carried in the vectors. A comparison of vector similarity with human reaction times in a single-word priming experiment is presented. These vectors provide the basis for a representational model of semantic memory, the Hyperspace Analogue to Language (HAL).
    Object
    Hyperspace Analogue to Language
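    The procedure itself is easy to sketch: slide a window over the token stream and accumulate distance-weighted co-occurrence counts. A minimal Python version; the paper uses a 10-word window over a large Usenet corpus, so the window size and toy sentence here are simplifications.

      from collections import defaultdict

      def hal_vectors(tokens: list[str], window: int = 4):
          """Distance-weighted co-occurrence counts: nearer words weigh more."""
          vec = defaultdict(lambda: defaultdict(float))
          for i, w in enumerate(tokens):
              for d in range(1, window + 1):
                  if i + d < len(tokens):
                      vec[w][tokens[i + d]] += window - d + 1
          return vec

      text = "the cat sat on the mat the dog sat on the rug".split()
      print(dict(hal_vectors(text)["sat"]))
      # {'on': 8.0, 'the': 7.0, 'mat': 2.0, 'rug': 2.0}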
  13. Weiermann, S.L.: Semantische Netze und Begriffsdeskription in der Wissensrepräsentation (2000) 0.01
    0.0058992757 = product of:
      0.023597103 = sum of:
        0.023597103 = product of:
          0.070791304 = sum of:
            0.070791304 = weight(_text_:language in 3001) [ClassicSimilarity], result of:
              0.070791304 = score(doc=3001,freq=4.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.42911017 = fieldWeight in 3001, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3001)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    LCSH
    German language / Semantics
    Subject
    German language / Semantics
  14. Ross, J.: ¬A new way of information retrieval : 3-D indexing and concept mapping (2000) 0.01
    0.0057488354 = product of:
      0.022995342 = sum of:
        0.022995342 = product of:
          0.06898602 = sum of:
            0.06898602 = weight(_text_:29 in 6171) [ClassicSimilarity], result of:
              0.06898602 = score(doc=6171,freq=2.0), product of:
                0.14791684 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042049456 = queryNorm
                0.46638384 = fieldWeight in 6171, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6171)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Date
    25. 2.1997 10:29:16
  15. Shiri, A.A.; Revie, C.; Chowdhury, G.: Thesaurus-enhanced search interfaces (2002) 0.01
    0.0057488354 = product of:
      0.022995342 = sum of:
        0.022995342 = product of:
          0.06898602 = sum of:
            0.06898602 = weight(_text_:29 in 3807) [ClassicSimilarity], result of:
              0.06898602 = score(doc=3807,freq=2.0), product of:
                0.14791684 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042049456 = queryNorm
                0.46638384 = fieldWeight in 3807, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3807)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Date
    18. 5.2002 17:29:00
  16. Shiri, A.A.; Revie, C.: ¬The effects of topic complexity and familiarity on cognitive and physical moves in a thesaurus-enhanced search environment (2003) 0.01
    0.0057488354 = product of:
      0.022995342 = sum of:
        0.022995342 = product of:
          0.06898602 = sum of:
            0.06898602 = weight(_text_:29 in 4695) [ClassicSimilarity], result of:
              0.06898602 = score(doc=4695,freq=2.0), product of:
                0.14791684 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042049456 = queryNorm
                0.46638384 = fieldWeight in 4695, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4695)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Source
    Journal of information science. 29(2003) no.6, S.517-
  17. Niemi, T.; Jämsen , J.: ¬A query language for discovering semantic associations, part I : approach and formal definition of query primitives (2007) 0.01
    0.0051607913 = product of:
      0.020643165 = sum of:
        0.020643165 = product of:
          0.061929494 = sum of:
            0.061929494 = weight(_text_:language in 591) [ClassicSimilarity], result of:
              0.061929494 = score(doc=591,freq=6.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.3753932 = fieldWeight in 591, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=591)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    In contemporary query languages, the user is responsible for navigation among semantically related data. Because of the huge amount of data and the complex structural relationships among data in modern applications, it is unrealistic to suppose that the user could know completely the content and structure of the available information. There are several query languages whose purpose is to facilitate navigation in unknown structures of databases. However, the background assumption of these languages is that the user knows how data are related to each other semantically in the structure at hand. So far only little attention has been paid to how unknown semantic associations among available data can be discovered. We address this problem in this article. A semantic association between two entities can be constructed if a sequence of relationships expressed explicitly in a database can be found that connects these entities to each other. This sequence may contain several other entities through which the original entities are connected to each other indirectly. We introduce an expressive and declarative query language for discovering semantic associations. Our query language is able, for example, to discover semantic associations between entities for which only some of the characteristics are known. Further, it integrates the manipulation of semantic associations with the manipulation of documents that may contain information on entities in semantic associations.
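    The core operation, finding a sequence of explicitly stored relationships that connects two entities, can be sketched as a graph search. A minimal Python sketch; the adjacency encoding and toy facts are our assumptions, not the authors' formal query primitives.

      from collections import deque

      def find_association(graph, start, goal, max_len=4):
          """Breadth-first search for the shortest relationship chain."""
          queue = deque([(start, [])])
          seen = {start}
          while queue:
              node, path = queue.popleft()
              if node == goal:
                  return path
              if len(path) >= max_len:
                  continue
              for rel, nxt in graph.get(node, []):
                  if nxt not in seen:
                      seen.add(nxt)
                      queue.append((nxt, path + [(node, rel, nxt)]))
          return None

      g = {"Kafka": [("born_in", "Prague")],
           "Prague": [("capital_of", "Czechia")]}
      print(find_association(g, "Kafka", "Czechia"))
      # [('Kafka', 'born_in', 'Prague'), ('Prague', 'capital_of', 'Czechia')]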
  18. Zenz, G.; Zhou, X.; Minack, E.; Siberski, W.; Nejdl, W.: Interactive query construction for keyword search on the Semantic Web (2012) 0.01
    0.0051607913 = product of:
      0.020643165 = sum of:
        0.020643165 = product of:
          0.061929494 = sum of:
            0.061929494 = weight(_text_:language in 430) [ClassicSimilarity], result of:
              0.061929494 = score(doc=430,freq=6.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.3753932 = fieldWeight in 430, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=430)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    With the advance of the semantic Web, increasing amounts of data are available in a structured and machine-understandable form. This opens opportunities for users to employ semantic queries instead of simple keyword-based ones to accurately express the information need. However, constructing semantic queries is a demanding task for human users [11]. To compose a valid semantic query, a user has to (1) master a query language (e.g., SPARQL) and (2) acquire sufficient knowledge about the ontology or the schema of the data source. While there are systems which support this task with visual tools [21, 26] or natural language interfaces [3, 13, 14, 18], the process of query construction can still be complex and time consuming. According to [24], users prefer keyword search, and struggle with the construction of semantic queries although being supported with a natural language interface. Several keyword search approaches have already been proposed to ease information seeking on semantic data [16, 32, 35] or databases [1, 31]. However, keyword queries lack the expressivity to precisely describe the user's intent. As a result, ranking can at best put query intentions of the majority on top, making it impossible to take the intentions of all users into consideration.
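    The ambiguity the authors tackle can be made concrete with a toy enumeration: each keyword may map to several schema elements, and the candidate semantic queries are the combinations, which the user then narrows down interactively. The mappings and query syntax below are invented for illustration; they are not the system's actual interface.

      from itertools import product

      MAPPINGS = {                         # keyword -> candidate schema elements
          "vienna": ["city:Vienna", "river:Vienna"],
          "mayor": ["property:mayorOf", "class:Mayor"],
      }

      def candidate_queries(keywords: list[str]) -> list[str]:
          options = [MAPPINGS.get(k, [k]) for k in keywords]
          return [" AND ".join(combo) for combo in product(*options)]

      for q in candidate_queries(["vienna", "mayor"]):
          print(q)    # four candidates for a two-keyword query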
  19. Roy, R.S.; Agarwal, S.; Ganguly, N.; Choudhury, M.: Syntactic complexity of Web search queries through the lenses of language models, networks and users (2016) 0.01
    0.0051607913 = product of:
      0.020643165 = sum of:
        0.020643165 = product of:
          0.061929494 = sum of:
            0.061929494 = weight(_text_:language in 3188) [ClassicSimilarity], result of:
              0.061929494 = score(doc=3188,freq=6.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.3753932 = fieldWeight in 3188, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3188)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Across the world, millions of users interact with search engines every day to satisfy their information needs. As the Web grows bigger over time, such information needs, manifested through user search queries, also become more complex. However, there has been no systematic study that quantifies the structural complexity of Web search queries. In this research, we make an attempt towards understanding and characterizing the syntactic complexity of search queries using a multi-pronged approach. We use traditional statistical language modeling techniques to quantify and compare the perplexity of queries with natural language (NL). We then use complex network analysis for a comparative analysis of the topological properties of queries issued by real Web users and those generated by statistical models. Finally, we conduct experiments to study whether search engine users are able to identify real queries, when presented along with model-generated ones. The three complementary studies show that the syntactic structure of Web queries is more complex than what n-grams can capture, but simpler than NL. Queries, thus, seem to represent an intermediate stage between syntactic and non-syntactic communication.
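    The language-model part of the study asks how predictable queries are relative to natural language. A minimal Python sketch of that kind of measurement, using a unigram model with add-one smoothing; the paper uses richer n-gram models, and the toy corpus is ours.

      import math
      from collections import Counter

      def unigram_perplexity(train: list[str], test: list[str]) -> float:
          counts = Counter(train)
          total, vocab = len(train), len(counts) + 1   # +1 for unseen words
          log_p = sum(math.log((counts[w] + 1) / (total + vocab)) for w in test)
          return math.exp(-log_p / len(test))

      nl = "the users search the web for information".split()
      print(unigram_perplexity(nl, "the web".split()))              # low: seen words
      print(unigram_perplexity(nl, "cheap flights berlin".split())) # high: unseen words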
  20. Yan, X.; Li, X.; Song, D.: ¬A correlation analysis on LSA and HAL semantic space models (2004) 0.01
    0.005056522 = product of:
      0.020226087 = sum of:
        0.020226087 = product of:
          0.06067826 = sum of:
            0.06067826 = weight(_text_:language in 2152) [ClassicSimilarity], result of:
              0.06067826 = score(doc=2152,freq=4.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.3678087 = fieldWeight in 2152, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2152)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    In this paper, we compare a well-known semantic space model, Latent Semantic Analysis (LSA), with another model, Hyperspace Analogue to Language (HAL), which is widely used in different areas, especially in automatic query refinement. We conduct this comparative analysis to test our hypothesis that, with respect to the ability to extract lexical information from a corpus of text, LSA is quite similar to HAL. We regard HAL and LSA as black boxes. Through a Pearson's correlation analysis of the outputs of these two black boxes, we conclude that LSA correlates highly with HAL, which justifies the expectation that LSA and HAL can play a similar role in facilitating automatic query refinement. This paper evaluates LSA in a new application area and contributes an effective way to compare different semantic space models.
    Object
    Hyperspace Analogue to Language
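    The comparison method reduces to one computation: score the same word pairs with both black boxes and correlate the two score lists. A minimal Python sketch of Pearson's r, with invented similarity values standing in for LSA and HAL outputs:

      import math

      def pearson(xs: list[float], ys: list[float]) -> float:
          n = len(xs)
          mx, my = sum(xs) / n, sum(ys) / n
          cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
          sy = math.sqrt(sum((y - my) ** 2 for y in ys))
          return cov / (sx * sy)

      lsa = [0.91, 0.40, 0.15, 0.72]   # hypothetical LSA pair similarities
      hal = [0.85, 0.46, 0.22, 0.65]   # hypothetical HAL pair similarities
      print(round(pearson(lsa, hal), 3))   # ~0.994: the two models agree closely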

Languages

  • e 76
  • d 9
  • f 1

Types

  • a 78
  • el 7
  • m 4
  • x 1