Search (5 results, page 1 of 1)

  • language_ss:"e"
  • author_ss:"Krause, J."
  1. Krause, J.: Shell Model, Semantic Web and Web Information Retrieval (2006) 0.00
    0.0042066295 = product of:
      0.016826518 = sum of:
        0.016826518 = weight(_text_:information in 6061) [ClassicSimilarity], result of:
          0.016826518 = score(doc=6061,freq=16.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.27429342 = fieldWeight in 6061, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6061)
      0.25 = coord(1/4)
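    The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output; the same formula underlies the scores of all five hits. As a minimal sketch (the function and argument names below are ours, not a Lucene API), the first hit's score can be recomputed from the leaf values shown:

      import math

      def classic_score(freq, doc_freq, max_docs, query_norm, field_norm, coord):
          """Recompute one ClassicSimilarity explain tree from its leaf values."""
          tf = math.sqrt(freq)                             # 4.0 = tf(freq=16.0)
          idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 1.7554779 = idf(docFreq=20772, maxDocs=44218)
          query_weight = idf * query_norm                  # 0.06134496 = queryWeight
          field_weight = tf * idf * field_norm             # 0.27429342 = fieldWeight
          return coord * query_weight * field_weight

      # Hit 1: freq=16.0, docFreq=20772, maxDocs=44218, queryNorm=0.034944877,
      # fieldNorm=0.0390625, coord=1/4 -> 0.0042066295
      print(classic_score(16.0, 20772, 44218, 0.034944877, 0.0390625, 0.25))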
    
    Abstract
    The mid-1990s were marked by heightened enthusiasm for the possibilities of the WWW, which has only recently given way - at least in relation to scientific information - to a more differentiated weighing of its advantages and disadvantages. Web Information Retrieval originated as a specialized discipline with great commercial significance (for an overview see Lewandowski 2005). Besides the new technological infrastructure that enables the indexing and searching, within seconds, of unimaginably large amounts of data worldwide, new assessment processes for the ranking of search results are being developed which exploit the link structures of the Web. These are the main innovation with respect to the traditional "mother discipline" of Information Retrieval. From the beginning, commercial search engines have applied the link structures of Web pages in a wide array of variations. From the perspective of scientific information, link-topology-based approaches were in essence trying to solve a self-created problem: on the one hand, it quickly became clear that the openness of the Web led to a hitherto unknown increase in available information, but this also caused the quality of the Web pages searched to become a problem - and with it the relevance of the results. The gatekeeper function of traditional information providers, which narrows every user query down to high-quality sources, was lacking. The recognition of the "authoritativeness" of Web pages by general search engines such as Google was therefore one of the most important factors in their success.
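    The link-topology ranking the abstract refers to can be pictured with a minimal PageRank-style power iteration; this illustrates the generic idea behind "authoritativeness" scores and is not an algorithm taken from the paper. The toy link graph and damping factor are invented:

      # Toy link graph: page -> pages it links to.
      links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
      damping = 0.85
      rank = {page: 1 / len(links) for page in links}

      for _ in range(50):  # power iteration until the ranks stabilise
          rank = {
              page: (1 - damping) / len(links)
              + damping * sum(rank[q] / len(links[q]) for q in links if page in links[q])
              for page in links
          }

      print(rank)  # pages with more or better-ranked inlinks score higher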
    Source
    Information und Sprache: Beiträge zu Informationswissenschaft, Computerlinguistik, Bibliothekswesen und verwandten Fächern. Festschrift für Harald H. Zimmermann. Ed. by Ilse Harms, Heinz-Dirk Luckhardt and Hans W. Giessen
  2. Hellweg, H.; Krause, J.; Mandl, T.; Marx, J.; Müller, M.N.O.; Mutschke, P.; Strötgen, R.: Treatment of semantic heterogeneity in information retrieval (2001) 0.00
    0.004121639 = product of:
      0.016486555 = sum of:
        0.016486555 = weight(_text_:information in 6560) [ClassicSimilarity], result of:
          0.016486555 = score(doc=6560,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.2687516 = fieldWeight in 6560, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=6560)
      0.25 = coord(1/4)
    
    Abstract
    Nowadays, users of information services are faced with highly decentralised, heterogeneous document sources with differing forms of content analysis. Semantic heterogeneity occurs, for example, when resources using different systems for content description are searched through a single query system. This report describes several approaches to handling semantic heterogeneity used in projects of the German Social Science Information Centre.
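    Purely as an illustration of what such treatment can look like in practice, the following hypothetical sketch of a transfer module maps query descriptors from one controlled vocabulary into another before the query is forwarded; the mapping table and all names are invented, not taken from the report:

      # Hypothetical cross-concordance: descriptors of vocabulary A -> vocabulary B.
      CROSS_CONCORDANCE = {
          "semantic heterogeneity": ["vocabulary mismatch"],
          "information retrieval": ["document retrieval", "online searching"],
      }

      def transfer_query(terms):
          """Replace each descriptor by its target-vocabulary equivalents; keep unmapped terms."""
          result = []
          for term in terms:
              result.extend(CROSS_CONCORDANCE.get(term.lower(), [term]))
          return result

      print(transfer_query(["Semantic Heterogeneity", "user interface"]))
      # ['vocabulary mismatch', 'user interface']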
  3. Krause, J.: Current research information as part of digital libraries and the heterogeneity problem : integrated searches in the context of databases with different content analyses (2002) 0.00
    0.003762524 = product of:
      0.015050096 = sum of:
        0.015050096 = weight(_text_:information in 3593) [ClassicSimilarity], result of:
          0.015050096 = score(doc=3593,freq=20.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.2453355 = fieldWeight in 3593, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=3593)
      0.25 = coord(1/4)
    
    Abstract
    Users of scientific information are now faced with a highly decentralized, heterogeneous document base with varied content analysis methods. Traditional providers of information such as libraries or information centers have increasingly been joined by scientists themselves, who are developing independent services of varying scope, relevance and degree of development on the WWW. Theoretically, groups that gather current research information (CRI), literature or factual information on specialized subjects can emerge anywhere in the world. One consequence of this is the presence of various inconsistencies:
    - Relevant, quality-controlled data can be found right next to irrelevant and perhaps demonstrably erroneous data.
    - In a system of this kind, descriptor A can assume the most disparate meanings. Even in the narrower context of specialized information, a descriptor A that has been extracted intellectually and correctly, with much care and attention, from a highly relevant document is not comparable with a term A that has been assigned by automatic indexing in some peripheral area.
    Thus, the main problem to be solved is as follows: users must be supplied with heterogeneous data from different sources, modalities and content analysis processes via a visual user interface, without, for example, inconsistencies in content analysis seriously impairing the quality of the search results. A scientist looking for social science information on subject X, for example, does not want to search first the social science literature database SOLIS and the current research database FORIS, then the library catalogues of the special collection area for the social sciences, and finally the WWW - each time using a different search strategy. He wants to phrase his search query only once, in the terminology to which he is accustomed, without having to deal with the remaining problems. Closer analysis of this problem shows that narrow technological concepts, even if they are undoubtedly necessary, are not sufficient on their own. They must be supplemented by new conceptual considerations relating to the treatment of breaks in consistency between the different processes of content analysis. Acceptable solutions are only obtained when both aspects are combined. The IZ research group (Bonn, Germany) is working on this aspect in four different projects: Carmen, ViBSoz, ELVIRA and the ETB project. Initial solutions in the form of transfer modules are available now and will be discussed.
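    The requirement that the user phrases the query only once can be pictured as a small broker that rewrites one query per target source. The sketch below is hypothetical: the source names echo the abstract, but the rewrite rules are invented placeholders, not the projects' transfer modules:

      # Hypothetical one-query broker: each source receives the query in its
      # own indexing vocabulary. Rewrite rules here are toy placeholders.
      SOURCES = {
          "SOLIS": lambda q: q,                                # literature database
          "FORIS": lambda q: q.replace("subject", "project"),  # current research database
          "OPAC": lambda q: q.upper(),                         # library catalogue
      }

      def broker(query):
          """Return the per-source form of a single user query."""
          return {name: rewrite(query) for name, rewrite in SOURCES.items()}

      for source, q in broker("social science subject X").items():
          print(source, "->", q)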
    Source
    Gaining insight from research information (CRIS2002): Proceedings of the 6th International Conference on Current Research Information Systems, University of Kassel, August 29 - 31, 2002. Eds: W. Adamczak and A. Nase
  4. Tauchert, W.; Hospodarsky, J.; Krause, J.; Schneider, C.; Womser-Hacker, C.: Effects of linguistic functions on information retrieval in a German language full-text database : comparison between retrieval in abstract and full text (1991) 0.00
    0.0035694437 = product of:
      0.014277775 = sum of:
        0.014277775 = weight(_text_:information in 465) [ClassicSimilarity], result of:
          0.014277775 = score(doc=465,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23274569 = fieldWeight in 465, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=465)
      0.25 = coord(1/4)
    
  5. Krause, J.; Marx, J.; Roppel, S.; Schudnagis, M.; Wolff, C.; Womser-Hacker, C.: Multimodality and object orientation in an intelligent materials information system (1993-94) 0.00
    0.003091229 = product of:
      0.012364916 = sum of:
        0.012364916 = weight(_text_:information in 12) [ClassicSimilarity], result of:
          0.012364916 = score(doc=12,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.20156369 = fieldWeight in 12, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=12)
      0.25 = coord(1/4)
    
    Abstract
    In this paper we present a multimodal design concept for a materials information system interface. The project WING-IIR combines form-oriented and natural-language database access in a GUI-based environment, giving the user a choice of query modes. Our design is embedded in a tool-based, object-oriented structure which allows for adequate interpretation and usability for both novice and expert users. Context-sensitivity and transparency between query modalities and different levels of data granularity further help in solving difficult materials problems. In addition, a number of Intelligent Information Retrieval (IIR) modules complement the basic database interface: a stereotype-based user model reduces interface complexity by adapting to the user's actual interests; the WING-GRAPH component allows for graphical retrieval of materials curves, i.e. users may manipulate graphical representations of data in order to query the database; and a fuzzy-WING component is proposed for modelling vagueness in natural-language queries as well as for the vague interpretation of seemingly exact queries.
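    As a hypothetical illustration of offering two query modes behind one interface (all field names and the natural-language pattern below are invented, not WING-IIR's actual design), a form-based query and a free-text query can be normalised into the same internal representation:

      import re

      def from_form(material, prop, min_value):
          # Structured form entry maps directly onto the internal query.
          return {"material": material, "property": prop, "min": min_value}

      def from_natural_language(text):
          # Very rough natural-language front end for queries such as
          # "steels with tensile strength above 500".
          m = re.search(r"(\w+?)s? with ([\w ]+?) above (\d+(?:\.\d+)?)", text)
          if m is None:
              raise ValueError("query not understood")
          return {"material": m.group(1), "property": m.group(2), "min": float(m.group(3))}

      assert from_form("steel", "tensile strength", 500.0) == \
             from_natural_language("steels with tensile strength above 500")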