Search (54 results, page 1 of 3)

  • theme_ss:"Semantisches Umfeld in Indexierung u. Retrieval"
  1. Wang, Z.; Khoo, C.S.G.; Chaudhry, A.S.: Evaluation of the navigation effectiveness of an organizational taxonomy built on a general classification scheme and domain thesauri (2014) 0.03
    0.032706924 = product of:
      0.081767306 = sum of:
        0.02060168 = product of:
          0.04120336 = sum of:
            0.04120336 = weight(_text_:problems in 1251) [ClassicSimilarity], result of:
              0.04120336 = score(doc=1251,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.27361554 = fieldWeight in 1251, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1251)
          0.5 = coord(1/2)
        0.061165623 = product of:
          0.12233125 = sum of:
            0.12233125 = weight(_text_:exercises in 1251) [ClassicSimilarity], result of:
              0.12233125 = score(doc=1251,freq=2.0), product of:
                0.25947425 = queryWeight, product of:
                  7.11192 = idf(docFreq=97, maxDocs=44218)
                  0.036484417 = queryNorm
                0.47145814 = fieldWeight in 1251, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.11192 = idf(docFreq=97, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1251)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
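     The nested "product of / sum of" listing above is Lucene ClassicSimilarity (TF-IDF) explain output for this hit. The following is a minimal sketch, assuming Lucene's standard formulas (tf = sqrt(freq), idf = ln(maxDocs/(docFreq+1)) + 1, and a final coord factor), that reproduces the score of entry 1 from the numbers shown; queryNorm and fieldNorm are copied from the output rather than recomputed, and the function names are ours.

       import math

       def idf(doc_freq, max_docs):
           # ClassicSimilarity idf: ln(maxDocs / (docFreq + 1)) + 1
           return math.log(max_docs / (doc_freq + 1)) + 1

       def term_score(freq, doc_freq, max_docs, field_norm, query_norm):
           tf = math.sqrt(freq)                    # tf(freq=2.0) = 1.4142135
           i = idf(doc_freq, max_docs)             # idf(docFreq=1937) = 4.1274753
           query_weight = i * query_norm           # 0.15058853
           field_weight = tf * i * field_norm      # 0.27361554
           return query_weight * field_weight      # weight(_text_:problems) = 0.04120336

       query_norm = 0.036484417
       problems  = term_score(2.0, 1937, 44218, 0.046875, query_norm) * 0.5   # coord(1/2)
       exercises = term_score(2.0,   97, 44218, 0.046875, query_norm) * 0.5   # coord(1/2)
       print(round((problems + exercises) * 0.4, 9))   # coord(2/5) -> ~0.032706924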
    
    Abstract
    This paper presents an evaluation study of the navigation effectiveness of a multifaceted organizational taxonomy that was built on the Dewey Decimal Classification and several domain thesauri in the area of library and information science education. The objective of the evaluation was to detect deficiencies in the taxonomy and to infer problems of applied construction steps from users' navigation difficulties. The evaluation approach included scenario-based navigation exercises and postexercise interviews. Navigation exercise errors and underlying reasons were analyzed in relation to specific components of the taxonomy and applied construction steps. Guidelines for the construction of the hierarchical structure and categories of an organizational taxonomy using existing general classification schemes and domain thesauri were derived from the evaluation results.
  2. Knorz, G.; Rein, B.: Semantische Suche in einer Hochschulontologie (2005) 0.02
    0.023477197 = product of:
      0.11738598 = sum of:
        0.11738598 = sum of:
          0.082784034 = weight(_text_:etc in 1852) [ClassicSimilarity], result of:
            0.082784034 = score(doc=1852,freq=2.0), product of:
              0.19761753 = queryWeight, product of:
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.036484417 = queryNorm
              0.41891038 = fieldWeight in 1852, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1852)
          0.034601945 = weight(_text_:22 in 1852) [ClassicSimilarity], result of:
            0.034601945 = score(doc=1852,freq=2.0), product of:
              0.12776221 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.036484417 = queryNorm
              0.2708308 = fieldWeight in 1852, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1852)
      0.2 = coord(1/5)
    
    Abstract
     Ontologies are used to provide, through semantic grounding, a fundamentally better basis, in particular for document retrieval, than the current state of the art offers. The paper presents an ontology developed and deployed at FH Darmstadt which is intended both to cover the university domain broadly and, at the same time, to describe it semantically in a differentiated way. The problem of semantic search is that it should be as easy for information seekers to use as common search engines, while at the same time delivering high-quality results on the basis of the elaborate information model. The paper describes which facilities the software K-Infinity provides and with which concept these facilities are used for a semantic search for documents and other information units (persons, events, projects, etc.).
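     A minimal sketch, in Python with rdflib, of the search idea the abstract describes: a keyword is matched against concept labels in the ontology, and documents, persons, and other information units linked to the matching concepts are returned. It is not based on K-Infinity, whose interface is not described here; the EX namespace, class names (Thema, Dokument, Person), and properties (behandelt, forschtZu) are hypothetical.

       from rdflib import Graph, Literal, Namespace, RDF, RDFS

       EX = Namespace("http://example.org/hochschule#")
       g = Graph()
       g.add((EX.semweb, RDF.type, EX.Thema))
       g.add((EX.semweb, RDFS.label, Literal("Semantic Web")))
       g.add((EX.doc1, RDF.type, EX.Dokument))
       g.add((EX.doc1, EX.behandelt, EX.semweb))       # a document on the topic
       g.add((EX.prof1, RDF.type, EX.Person))
       g.add((EX.prof1, EX.forschtZu, EX.semweb))      # a person working on the topic

       QUERY = """
       PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
       SELECT DISTINCT ?unit ?unitType WHERE {
           ?concept rdfs:label ?label .
           FILTER(CONTAINS(LCASE(STR(?label)), LCASE(STR(?kw))))
           ?unit ?rel ?concept .
           ?unit a ?unitType .
       }"""

       def semantic_search(keyword):
           # match the keyword against concept labels, then follow any relation
           # from other information units (documents, persons, ...) to the concept
           rows = g.query(QUERY, initBindings={"kw": Literal(keyword)})
           return [(str(r.unit), str(r.unitType)) for r in rows]

       print(semantic_search("semantic"))   # doc1 as Dokument, prof1 as Person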
    Date
    11. 2.2011 18:22:58
  3. Knorz, G.; Rein, B.: Semantische Suche in einer Hochschulontologie : Ontologie-basiertes Information-Filtering und -Retrieval mit relationalen Datenbanken (2005) 0.02
    0.023477197 = product of:
      0.11738598 = sum of:
        0.11738598 = sum of:
          0.082784034 = weight(_text_:etc in 4324) [ClassicSimilarity], result of:
            0.082784034 = score(doc=4324,freq=2.0), product of:
              0.19761753 = queryWeight, product of:
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.036484417 = queryNorm
              0.41891038 = fieldWeight in 4324, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4324)
          0.034601945 = weight(_text_:22 in 4324) [ClassicSimilarity], result of:
            0.034601945 = score(doc=4324,freq=2.0), product of:
              0.12776221 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.036484417 = queryNorm
              0.2708308 = fieldWeight in 4324, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4324)
      0.2 = coord(1/5)
    
    Abstract
     Ontologies are used to provide, through semantic grounding, a fundamentally better basis, in particular for document retrieval, than the current state of the art offers. The paper presents an ontology developed and deployed at FH Darmstadt which is intended both to cover the university domain broadly and, at the same time, to describe it semantically in a differentiated way. The problem of semantic search is that it should be as easy for information seekers to use as common search engines, while at the same time delivering high-quality results on the basis of the elaborate information model. The paper describes which facilities the software K-Infinity provides and with which concept these facilities are used for a semantic search for documents and other information units (persons, events, projects, etc.).
    Date
    11. 2.2011 18:22:25
  4. Caro Castro, C.; Travieso Rodríguez, C.: Ariadne's thread : knowledge structures for browsing in OPAC's (2003) 0.01
    0.013085462 = product of:
      0.032713655 = sum of:
        0.012017647 = product of:
          0.024035294 = sum of:
            0.024035294 = weight(_text_:problems in 2768) [ClassicSimilarity], result of:
              0.024035294 = score(doc=2768,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.15960906 = fieldWeight in 2768, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2768)
          0.5 = coord(1/2)
        0.020696009 = product of:
          0.041392017 = sum of:
            0.041392017 = weight(_text_:etc in 2768) [ClassicSimilarity], result of:
              0.041392017 = score(doc=2768,freq=2.0), product of:
                0.19761753 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.036484417 = queryNorm
                0.20945519 = fieldWeight in 2768, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2768)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     Subject searching is the most common but also the most problematic kind of searching for end users. The aim of this paper is to examine how users' expressions match subject headings and to test whether the knowledge structure used in online catalogs enhances searching effectiveness. A literature review of difficulties in subject access and of methods proposed to improve it is also presented. For the empirical analysis, transaction logs from the online catalogs of two university libraries (CISNE and FAMA) were collected. Results show that more than a quarter of user queries are effective thanks to an alphabetical subject index approach and to browsing through hypertextual links. 1. Introduction. Since the 1980s, online public access catalogs (OPACs) have become the usual way to access bibliographic information. Over the last two decades, technological development has helped to extend their use, making access feasible for an increasingly large and heterogeneous body of users, and also making it possible to incorporate information resources in electronic formats and to interconnect systems. However, technology seems to have developed faster than our knowledge of the tasks to which it has been applied, and faster than our capacity to adapt to it. The conceptual model of the OPAC has hardly been modified recently, and to interact with one, users still need to combine the same skills and basic knowledge as at the time of its introduction (Borgman, 1986, 2000): a) conceptual knowledge to translate the information need into an appropriate query on the basis of a well-formed mental model of the system, b) semantic and syntactic knowledge to be able to implement that query (access fields, search type, Boolean logic, etc.), and c) basic technical skills in computing. At present many users have the essential technical skills to use a computer with more or less expertise. Their number is substantially reduced when it comes to the conceptual, semantic and syntactic knowledge necessary to achieve a moderately satisfactory search. An added difficulty arises in subject searching, as users must make their as yet unclear information needs concrete in terms that the information retrieval system can understand. Much research has focused on unskilled searchers' difficulties in entering an effective query. The influence of mental models, i.e. users' assumptions about the characteristics, structure, contents and operation of the system they interact with, has been analysed (Dillon, 2000; Dimitroff, 2000). Another issue that creates difficulties is vocabulary: how to find the right terms to formulate a query and to modify it as the case may be. Studied aspects include the characteristics of the terminology and expressions used in searching (Bates, 1993), the match between user terms and the subject headings of the catalog (Carlyle, 1989; Drabenstott, 1996; Drabenstott & Vizine-Goetz, 1994), the incidence of spelling errors (Drabenstott and Weller, 1996; Ferl and Millsap, 1996; Walker and Jones, 1987), users' problems
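     A minimal sketch, not the authors' procedure, of the kind of comparison between transaction-log queries and catalog subject headings that the study performs: each user expression is classified as an exact, partial, keyword, or failed match against the heading list. The sample headings, queries, and match categories are invented for illustration.

       def classify(query, headings):
           q = query.strip().lower()
           lowered = [h.lower() for h in headings]
           if q in lowered:
               return "exact match"
           if any(q in h or h in q for h in lowered):
               return "partial match"
           if any(all(word in h for word in q.split()) for h in lowered):
               return "keyword match"
           return "no match"

       headings = ["Information retrieval", "Libraries -- Automation", "Cataloging"]
       for query in ["information retrieval", "library automation", "opac design"]:
           print(query, "->", classify(query, headings))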
  5. Bradford, R.B.: Relationship discovery in large text collections using Latent Semantic Indexing (2006) 0.01
    0.009448289 = product of:
      0.023620723 = sum of:
        0.013734453 = product of:
          0.027468907 = sum of:
            0.027468907 = weight(_text_:problems in 1163) [ClassicSimilarity], result of:
              0.027468907 = score(doc=1163,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.18241036 = fieldWeight in 1163, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1163)
          0.5 = coord(1/2)
        0.009886269 = product of:
          0.019772539 = sum of:
            0.019772539 = weight(_text_:22 in 1163) [ClassicSimilarity], result of:
              0.019772539 = score(doc=1163,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.15476047 = fieldWeight in 1163, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1163)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This paper addresses the problem of information discovery in large collections of text. For users, one of the key problems in working with such collections is determining where to focus their attention. In selecting documents for examination, users must be able to formulate reasonably precise queries. Queries that are too broad will greatly reduce the efficiency of information discovery efforts by overwhelming the users with peripheral information. In order to formulate efficient queries, a mechanism is needed to automatically alert users regarding potentially interesting information contained within the collection. This paper presents the results of an experiment designed to test one approach to generation of such alerts. The technique of latent semantic indexing (LSI) is used to identify relationships among entities of interest. Entity extraction software is used to pre-process the text of the collection so that the LSI space contains representation vectors for named entities in addition to those for individual terms. In the LSI space, the cosine of the angle between the representation vectors for two entities captures important information regarding the degree of association of those two entities. For appropriate choices of entities, determining the entity pairs with the highest mutual cosine values yields valuable information regarding the contents of the text collection. The test database used for the experiment consists of 150,000 news articles. The proposed approach for alert generation is tested using a counterterrorism analysis example. The approach is shown to have significant potential for aiding users in rapidly focusing on information of potential importance in large text collections. The approach also has value in identifying possible use of aliases.
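     A minimal sketch of the LSI step the abstract describes, using plain numpy instead of the paper's entity-extraction pipeline: a term/entity-by-document matrix is reduced by SVD, and the cosine between two entity vectors in the reduced space is taken as a measure of their association. The toy matrix and entity names are invented.

       import numpy as np

       terms = ["entity_A", "entity_B", "entity_C", "finance", "travel"]
       A = np.array([[2, 0, 1, 0],      # term/entity-by-document occurrence counts
                     [1, 0, 2, 0],
                     [0, 3, 0, 1],
                     [2, 0, 1, 0],
                     [0, 2, 0, 2]], dtype=float)

       U, s, Vt = np.linalg.svd(A, full_matrices=False)
       k = 2
       term_vecs = U[:, :k] * s[:k]     # representation vectors in the k-dimensional LSI space

       def cosine(u, v):
           return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

       # high mutual cosine values suggest an association between two entities
       print(terms[0], terms[1], cosine(term_vecs[0], term_vecs[1]))
       print(terms[0], terms[2], cosine(term_vecs[0], term_vecs[2]))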
    Source
    Proceedings of the Fourth Workshop on Link Analysis, Counterterrorism, and Security, SIAM Data Mining Conference, Bethesda, MD, 20-22 April, 2006. [http://www.siam.org/meetings/sdm06/workproceed/Link%20Analysis/15.pdf]
  6. Boyack, K.W.; Wylie,B.N.; Davidson, G.S.: Information Visualization, Human-Computer Interaction, and Cognitive Psychology : Domain Visualizations (2002) 0.01
    0.0069906483 = product of:
      0.03495324 = sum of:
        0.03495324 = product of:
          0.06990648 = sum of:
            0.06990648 = weight(_text_:22 in 1352) [ClassicSimilarity], result of:
              0.06990648 = score(doc=1352,freq=4.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.54716086 = fieldWeight in 1352, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1352)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    22. 2.2003 17:25:39
    22. 2.2003 18:17:40
  7. Smeaton, A.F.; Rijsbergen, C.J. van: ¬The retrieval effects of query expansion on a feedback document retrieval system (1983) 0.01
    0.0069203894 = product of:
      0.034601945 = sum of:
        0.034601945 = product of:
          0.06920389 = sum of:
            0.06920389 = weight(_text_:22 in 2134) [ClassicSimilarity], result of:
              0.06920389 = score(doc=2134,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.5416616 = fieldWeight in 2134, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2134)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    30. 3.2001 13:32:22
  8. Rekabsaz, N. et al.: Toward optimized multimodal concept indexing (2016) 0.00
    0.004943135 = product of:
      0.024715675 = sum of:
        0.024715675 = product of:
          0.04943135 = sum of:
            0.04943135 = weight(_text_:22 in 2751) [ClassicSimilarity], result of:
              0.04943135 = score(doc=2751,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.38690117 = fieldWeight in 2751, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2751)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    1. 2.2016 18:25:22
  9. Kozikowski, P. et al.: Support of part-whole relations in query answering (2016) 0.00
    0.004943135 = product of:
      0.024715675 = sum of:
        0.024715675 = product of:
          0.04943135 = sum of:
            0.04943135 = weight(_text_:22 in 2754) [ClassicSimilarity], result of:
              0.04943135 = score(doc=2754,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.38690117 = fieldWeight in 2754, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2754)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    1. 2.2016 18:25:22
  10. Marx, E. et al.: Exploring term networks for semantic search over RDF knowledge graphs (2016) 0.00
    0.004943135 = product of:
      0.024715675 = sum of:
        0.024715675 = product of:
          0.04943135 = sum of:
            0.04943135 = weight(_text_:22 in 3279) [ClassicSimilarity], result of:
              0.04943135 = score(doc=3279,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.38690117 = fieldWeight in 3279, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3279)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  11. Kopácsi, S. et al.: Development of a classification server to support metadata harmonization in a long term preservation system (2016) 0.00
    0.004943135 = product of:
      0.024715675 = sum of:
        0.024715675 = product of:
          0.04943135 = sum of:
            0.04943135 = weight(_text_:22 in 3280) [ClassicSimilarity], result of:
              0.04943135 = score(doc=3280,freq=2.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.38690117 = fieldWeight in 3280, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3280)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  12. Sacco, G.M.: Dynamic taxonomies and guided searches (2006) 0.00
    0.0048934543 = product of:
      0.02446727 = sum of:
        0.02446727 = product of:
          0.04893454 = sum of:
            0.04893454 = weight(_text_:22 in 5295) [ClassicSimilarity], result of:
              0.04893454 = score(doc=5295,freq=4.0), product of:
                0.12776221 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036484417 = queryNorm
                0.38301262 = fieldWeight in 5295, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5295)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    22. 7.2006 17:56:22
  13. Nagao, M.: Knowledge and inference (1990) 0.00
    0.004855863 = product of:
      0.024279313 = sum of:
        0.024279313 = product of:
          0.048558626 = sum of:
            0.048558626 = weight(_text_:problems in 3304) [ClassicSimilarity], result of:
              0.048558626 = score(doc=3304,freq=4.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.322459 = fieldWeight in 3304, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3304)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
     Knowledge and Inference discusses an important problem for software systems: how do we treat knowledge and ideas on a computer, and how do we use inference to solve problems on a computer? The book talks about the problems of knowledge and inference for the purpose of merging artificial intelligence and library science. The book begins by clarifying the concept of "knowledge" from many points of view, followed by a chapter on the current state of library science and the place of artificial intelligence in library science. Subsequent chapters cover central topics in artificial intelligence: search and problem solving, methods of making proofs, and the use of knowledge in looking for a proof. There is also a discussion of how to use the knowledge system. The final chapter describes a popular expert system. It describes tools for building expert systems using an example based on Expert Systems - A Practical Introduction by P. Sell (Macmillan, 1985). This type of software is called an "expert system shell." This book was written as a textbook for undergraduate students, covering only the basics but explaining them in as much detail as possible.
  14. Hancock-Beaulieu, M.; Fieldhouse, M.; Do, T.: ¬A graphical interface for OKAPI : the design and evaluation of an online catalogue system with direct manipulation interaction for subject access (1994) 0.00
    0.0048070587 = product of:
      0.024035294 = sum of:
        0.024035294 = product of:
          0.048070587 = sum of:
            0.048070587 = weight(_text_:problems in 1318) [ClassicSimilarity], result of:
              0.048070587 = score(doc=1318,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.31921813 = fieldWeight in 1318, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1318)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    A project to design a graphical user interface for the OKAPI online catalogue search system which uses the basic term weighting probabilistic search engine. Presents a research context of the project with a discussion of interface and functionality issues relating to the design of OPACs. Describes the design methodology and evaluation methodology. Presents the preliminary results of the field trial evaluation. Considers problems encountered in the field trial and discusses contributory factors to the effectiveness of interactive query expansion. Highlights the tension between usability and functionality in highly interactive retrieval and suggests further areas of research
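     A minimal sketch of the kind of probabilistic term weighting associated with the Okapi family of systems: the Robertson/Sparck Jones collection weight (here in its relevance-free form) combined with a damped within-document term frequency. The abstract does not give the exact weighting function of the system evaluated, so the constants and function names below are illustrative only.

       import math

       def rsj_weight(n_t, N):
           # collection-level weight for a term occurring in n_t of N documents
           return math.log((N - n_t + 0.5) / (n_t + 0.5))

       def term_weight(doc_tf, n_t, N, k1=1.2):
           # damped within-document term frequency times the collection weight
           return (doc_tf * (k1 + 1) / (doc_tf + k1)) * rsj_weight(n_t, N)

       # e.g. a term appearing twice in a document and in 50 of 10000 documents
       print(term_weight(doc_tf=2, n_t=50, N=10000))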
  15. Brezillon, P.; Saker, I.: Modeling context in information seeking (1999) 0.00
    0.0047305166 = product of:
      0.023652581 = sum of:
        0.023652581 = product of:
          0.047305163 = sum of:
            0.047305163 = weight(_text_:etc in 276) [ClassicSimilarity], result of:
              0.047305163 = score(doc=276,freq=2.0), product of:
                0.19761753 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.036484417 = queryNorm
                0.23937736 = fieldWeight in 276, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.03125 = fieldNorm(doc=276)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
     Context plays an important role in a number of domains where reasoning intervenes, as in understanding, interpretation, diagnosis, etc. The reason is that reasoning activities rely heavily on a background (or experience) that is generally not made explicit and that gives a contextual dimension to knowledge. On the Web in December 1996, AltaVista returned more than 710000 pages containing the word context, whereas concept yielded only 639000 references. A clear definition of this word remains to be found. There are several formal definitions of this concept (references are given in Brézillon, 1996): a set of preferences and/or beliefs, an infinite and only partially known collection of assumptions, a list of attributes, the product of an interpretation, possible worlds, assumptions under which a statement is true or false. One faces the same situation at the programming level: a collection of context schemas; a path in information retrieval; slots in object-oriented languages; a special, buffer-like data structure; a window on the screen; buttons which are functional, customisable and shareable; an interpreter which controls the system's activity; the characteristics of the situation and the goals of the knowledge use; or entities (things or events) related in a certain way that makes it possible to attend to what is said and what is not said. Context is often assimilated to a set of restrictions (e.g., preconditions) that limit access to parts of an application. The first works to consider context explicitly were in Natural Language research. Researchers in this domain focus on the linguistic context, sometimes associated with other types of context such as semantic context, cognitive context, physical and perceptual context, and social context (Bunt, 1997).
  16. Agarwal, N.K.: Exploring context in information behavior : seeker, situation, surroundings, and shared identities (2018) 0.00
    0.0047305166 = product of:
      0.023652581 = sum of:
        0.023652581 = product of:
          0.047305163 = sum of:
            0.047305163 = weight(_text_:etc in 4992) [ClassicSimilarity], result of:
              0.047305163 = score(doc=4992,freq=2.0), product of:
                0.19761753 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.036484417 = queryNorm
                0.23937736 = fieldWeight in 4992, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4992)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
     The field of human information behavior runs the gamut of processes from the realization of a need or gap in understanding, to the search for information from one or more sources to fill that gap, to the use of that information to complete a task at hand or to satisfy a curiosity, as well as other behaviors such as avoiding information or finding information serendipitously. Designers of mechanisms, tools, and computer-based systems to facilitate this seeking and search process often lack a full knowledge of the context surrounding the search. This context may vary depending on the job or role of the person; individual characteristics such as personality, domain knowledge, age, gender, perception of self, etc.; the task at hand; the source and the channel and their degree of accessibility and usability; and the relationship that the seeker shares with the source. Yet researchers have yet to agree on what context really means. While there have been various research studies incorporating context, and biennial conferences on context in information behavior, there is still no clear definition of what context is, what its boundaries are, and what elements and variables comprise context. In this book, we look at the many definitions of and the theoretical and empirical studies on context, and I attempt to map the conceptual space of context in information behavior. I propose theoretical frameworks to map the boundaries, elements, and variables of context. I then discuss how to incorporate these frameworks and variables in the design of research studies on context. We then arrive at a unified definition of context. This book should provide designers of search systems a better understanding of context as they seek to meet the needs and demands of information seekers. It will be an important resource for researchers in Library and Information Science, especially doctoral students looking for one resource that covers an exhaustive range of the most current literature related to context, the best selection of classics, and a synthesis of these into theoretical frameworks and a unified definition. The book should help to move forward research in the field by clarifying the elements, variables, and views that are pertinent. In particular, the list of elements to be considered, and the variables associated with each element, will be extremely useful to researchers wanting to include the influences of context in their studies.
  17. ALA / Subcommittee on Subject Relationships/Reference Structures: Final Report to the ALCTS/CCS Subject Analysis Committee (1997) 0.00
    0.0041392017 = product of:
      0.020696009 = sum of:
        0.020696009 = product of:
          0.041392017 = sum of:
            0.041392017 = weight(_text_:etc in 1800) [ClassicSimilarity], result of:
              0.041392017 = score(doc=1800,freq=2.0), product of:
                0.19761753 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.036484417 = queryNorm
                0.20945519 = fieldWeight in 1800, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1800)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    The SAC Subcommittee on Subject Relationships/Reference Structures was authorized at the 1995 Midwinter Meeting and appointed shortly before Annual Conference. Its creation was one result of a discussion of how (and why) to promote the display and use of broader-term subject heading references, and its charge reads as follows: To investigate: (1) the kinds of relationships that exist between subjects, the display of which are likely to be useful to catalog users; (2) how these relationships are or could be recorded in authorities and classification formats; (3) options for how these relationships should be presented to users of online and print catalogs, indexes, lists, etc. By the summer 1996 Annual Conference, make some recommendations to SAC about how to disseminate the information and/or implement changes. At that time assess the need for additional time to investigate these issues. The Subcommittee's work on each of the imperatives in the charge was summarized in a report issued at the 1996 Annual Conference (Appendix A). Highlights of this work included the development of a taxonomy of 165 subject relationships; a demonstration that, using existing MARC coding, catalog systems could be programmed to generate references they do not currently support; and an examination of reference displays in several CD-ROM database products. Since that time, work has continued on identifying term relationships and display options; on tracking research, discussion, and implementation of subject relationships in information systems; and on compiling a list of further research needs.
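     A minimal sketch of the kind of reference generation from existing MARC coding that the report says catalog systems could be programmed to perform: for a topical authority record, 550 see-also tracings whose $w control subfield begins with 'g' are read as broader terms of the 150 heading. This is our reading of the MARC 21 authority format, not code from the report; the file name and the pymarc-based helper are hypothetical.

       from pymarc import MARCReader

       def broader_term_references(path):
           refs = []
           with open(path, "rb") as fh:
               for record in MARCReader(fh):
                   headings = record.get_fields("150")
                   if not headings:
                       continue
                   heading = " ".join(headings[0].get_subfields("a", "x"))
                   for tracing in record.get_fields("550"):
                       w = tracing.get_subfields("w")
                       if w and w[0].startswith("g"):     # 'g' = broader term
                           broader = " ".join(tracing.get_subfields("a"))
                           refs.append((heading, broader))
           return refs

       for narrow, broad in broader_term_references("authorities.mrc"):
           print(f"{narrow}: search also under the broader term {broad}")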
  18. Oard, D.W.: Alternative approaches for cross-language text retrieval (1997) 0.00
    0.0041392017 = product of:
      0.020696009 = sum of:
        0.020696009 = product of:
          0.041392017 = sum of:
            0.041392017 = weight(_text_:etc in 1164) [ClassicSimilarity], result of:
              0.041392017 = score(doc=1164,freq=2.0), product of:
                0.19761753 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.036484417 = queryNorm
                0.20945519 = fieldWeight in 1164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1164)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Multilingual text retrieval can be defined as selection of useful documents from collections that may contain several languages (English, French, Chinese, etc.). This formulation allows for the possibility that individual documents might contain more than one language, a common occurrence in some applications. Both cross-language and within-language retrieval are included in this formulation, but it is the cross-language aspect of the problem which distinguishes multilingual text retrieval from its well studied monolingual counterpart. At the SIGIR 96 workshop on "Cross-Linguistic Information Retrieval" the participants discussed the proliferation of terminology being used to describe the field and settled on "Cross-Language" as the best single description of the salient aspect of the problem. "Multilingual" was felt to be too broad, since that term has also been used to describe systems able to perform within-language retrieval in more than one language but that lack any cross-language capability. "Cross-lingual" and "cross-linguistic" were felt to be equally good descriptions of the field, but "crosslanguage" was selected as the preferred term in the interest of standardization. Unfortunately, at about the same time the U.S. Defense Advanced Research Projects Agency (DARPA) introduced "translingual" as their preferred term, so we are still some distance from reaching consensus on this matter.
  19. Nie, J.-Y.: Query expansion and query translation as logical inference (2003) 0.00
    0.0041203364 = product of:
      0.02060168 = sum of:
        0.02060168 = product of:
          0.04120336 = sum of:
            0.04120336 = weight(_text_:problems in 1425) [ClassicSimilarity], result of:
              0.04120336 = score(doc=1425,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.27361554 = fieldWeight in 1425, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1425)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
     A number of studies have examined the problems of query expansion in monolingual Information Retrieval (IR) and query translation for cross-language IR. However, no link has been made between them. This article first shows that query translation is a special case of query expansion. There is also another set of studies on inferential IR. Again, no relationship has been established with query translation or query expansion. The second claim of this article is that logical inference is a general form that covers query expansion and query translation. This analysis provides a unified view of different subareas of IR. We further develop the inferential IR approach in two particular contexts: using fuzzy logic and probability theory. The evaluation formulas obtained are shown to correspond strongly to those used in other IR models. This indicates that inference is indeed the core of advanced IR.
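     A minimal sketch, not the article's formal model, of the claim that query translation is a special case of query expansion: each query term is mapped to weighted related terms (synonyms for monolingual expansion, target-language terms for translation), and a document is scored through those inferred terms. The mappings and probabilities below are invented.

       expansion = {                      # P(t' | t): monolingual expansion
           "car": {"car": 1.0, "automobile": 0.7, "vehicle": 0.4},
       }
       translation = {                    # P(t' | t): cross-language "expansion"
           "car": {"voiture": 0.8, "automobile": 0.5},
       }

       def score(query_terms, doc_terms, mapping):
           doc = set(doc_terms)
           s = 0.0
           for t in query_terms:
               for t_prime, p in mapping.get(t, {t: 1.0}).items():
                   if t_prime in doc:
                       s += p             # strength of the inference t -> t' -> document
           return s

       english_doc = ["the", "automobile", "industry"]
       french_doc = ["industrie", "de", "la", "voiture"]
       print(score(["car"], english_doc, expansion))     # expansion case
       print(score(["car"], french_doc, translation))    # translation as expansion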
  20. Cool, C.; Spink, A.: Issues of context in information retrieval (IR) : an introduction to the special issue (2002) 0.00
    0.0041203364 = product of:
      0.02060168 = sum of:
        0.02060168 = product of:
          0.04120336 = sum of:
            0.04120336 = weight(_text_:problems in 2587) [ClassicSimilarity], result of:
              0.04120336 = score(doc=2587,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.27361554 = fieldWeight in 2587, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2587)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    The subject of context has received a great deal of attention in the information retrieval (IR) literature over the past decade, primarily in studies of information seeking and IR interactions. Recently, attention to context in IR has expanded to address new problems in new environments. In this paper we outline five overlapping dimensions of context which we believe to be important constituent elements and we discuss how they are related to different issues in IR research. The papers in this special issue are summarized with respect to how they represent work that is being conducted within these dimensions of context. We conclude with future areas of research which are needed in order to fully understand the multidimensional nature of context in IR.

Languages

  • e 47
  • d 4
  • chi 1
  • f 1

Types

  • a 44
  • el 7
  • m 5
  • r 2
  • x 1
