Search (205 results, page 1 of 11)

  • Active filter: type_ss:"x"
  1. Toussi, M.: Information Retrieval am Beispiel der Wide Area Information Server (WAIS) und dem World Wide Web (WWW) (1996) 0.16
    0.15596224 = product of:
      0.3119245 = sum of:
        0.18207194 = weight(_text_:wide in 5965) [ClassicSimilarity], result of:
          0.18207194 = score(doc=5965,freq=4.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.9692284 = fieldWeight in 5965, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.109375 = fieldNorm(doc=5965)
        0.06984606 = weight(_text_:web in 5965) [ClassicSimilarity], result of:
          0.06984606 = score(doc=5965,freq=2.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.50479853 = fieldWeight in 5965, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.109375 = fieldNorm(doc=5965)
        0.06000649 = weight(_text_:retrieval in 5965) [ClassicSimilarity], result of:
          0.06000649 = score(doc=5965,freq=2.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.46789268 = fieldWeight in 5965, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.109375 = fieldNorm(doc=5965)
      0.5 = coord(3/6)
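    The breakdown above is standard Lucene ClassicSimilarity (TF-IDF) explain output. As a minimal sketch, assuming Lucene's classic formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), fieldWeight = tf · idf · fieldNorm, queryWeight = idf · queryNorm, times a coord factor for the fraction of query clauses matched), the top hit's score can be reproduced from the numbers shown:

      import math

      def idf(doc_freq, max_docs):
          # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def term_score(freq, doc_freq, field_norm, query_norm, max_docs=44218):
          # queryWeight = idf * queryNorm; fieldWeight = sqrt(freq) * idf * fieldNorm
          term_idf = idf(doc_freq, max_docs)
          return (term_idf * query_norm) * (math.sqrt(freq) * term_idf * field_norm)

      QUERY_NORM = 0.042397358  # queryNorm as reported in the explain output above

      # Document 5965: wide (freq=4, docFreq=1430), web (2, 4597), retrieval (2, 5836)
      terms = [(4.0, 1430), (2.0, 4597), (2.0, 5836)]
      total = sum(term_score(freq, df, 0.109375, QUERY_NORM) for freq, df in terms)
      print(total * 3 / 6)  # coord(3/6): 3 of 6 query clauses matched -> ~0.15596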
    
  2. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.10
    0.098663256 = product of:
      0.14799488 = sum of:
        0.044892162 = product of:
          0.13467649 = sum of:
            0.13467649 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.13467649 = score(doc=701,freq=2.0), product of:
                0.35944527 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.042397358 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
        0.02822207 = weight(_text_:web in 701) [ClassicSimilarity], result of:
          0.02822207 = score(doc=701,freq=4.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.2039694 = fieldWeight in 701, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.06414963 = weight(_text_:retrieval in 701) [ClassicSimilarity], result of:
          0.06414963 = score(doc=701,freq=28.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.5001983 = fieldWeight in 701, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.010731019 = product of:
          0.032193057 = sum of:
            0.032193057 = weight(_text_:system in 701) [ClassicSimilarity], result of:
              0.032193057 = score(doc=701,freq=6.0), product of:
                0.13353272 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.042397358 = queryNorm
                0.24108742 = fieldWeight in 701, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
      0.6666667 = coord(4/6)
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactical nature, traditional information retrieval approaches have not succeeded in treating the content itself (i.e. its meaning, rather than its representation), which leads to very low usefulness of the results of a retrieval process for the user's task at hand. In the last ten years ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the retrieval process is inherently ambiguous: a user, unfamiliar with the underlying repository and/or query syntax, only approximates his information need in a query. It is therefore necessary to involve the user more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need).
    This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to interpret the meaning of his query conceptually, while the underlying domain ontology drives the conceptualisation process. In this way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation process is realised as the so-called Librarian Agent Query Refinement Process.
    In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics emerge automatically from the results. Our evaluation studies have shown that the ability to conceptualise a user's information need in the right manner, and to interpret the retrieval results accordingly, is the key to realising much more meaningful information retrieval systems.
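    To make the idea of ontology-driven query interpretation concrete, here is a purely illustrative Python sketch (not Stojanovic's Librarian Agent implementation; the toy ontology, relation names and refinement logic are assumptions) of how a domain ontology can turn an ambiguous keyword query into a set of conceptual interpretations offered back to the user:

      # Toy ontology: concept -> broader/narrower/related concepts. All entries hypothetical.
      ONTOLOGY = {
          "jaguar":        {"broader": None, "narrower": ["jaguar_car", "jaguar_animal"]},
          "jaguar_car":    {"broader": "jaguar", "related": ["engine", "vehicle"]},
          "jaguar_animal": {"broader": "jaguar", "related": ["habitat", "predator"]},
      }

      def refine(query_term):
          """If the query term maps to an ambiguous concept, propose one
          refined query per narrower concept instead of answering directly."""
          entry = ONTOLOGY.get(query_term)
          if entry and entry.get("narrower"):
              # Ambiguous: let the user choose among conceptual interpretations.
              return [(c, ONTOLOGY[c]["related"]) for c in entry["narrower"]]
          return [(query_term, [])]

      for concept, expansion in refine("jaguar"):
          print(f"interpretation: {concept}, expanded with: {expansion}")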
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
    Theme
    Semantic Web
  3. Hüsken, P.: Information Retrieval im Semantic Web (2006) 0.08
    0.07924003 = product of:
      0.15848006 = sum of:
        0.055176124 = weight(_text_:wide in 4333) [ClassicSimilarity], result of:
          0.055176124 = score(doc=4333,freq=2.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.29372054 = fieldWeight in 4333, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=4333)
        0.06693452 = weight(_text_:web in 4333) [ClassicSimilarity], result of:
          0.06693452 = score(doc=4333,freq=10.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.48375595 = fieldWeight in 4333, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4333)
        0.036369424 = weight(_text_:retrieval in 4333) [ClassicSimilarity], result of:
          0.036369424 = score(doc=4333,freq=4.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.2835858 = fieldWeight in 4333, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=4333)
      0.5 = coord(3/6)
    
    Abstract
    The Semantic Web denotes an extended World Wide Web (WWW) that models the meaning of presented content in new standardised languages such as RDF Schema and OWL. This thesis addresses the information retrieval aspect, i.e. it examines to what extent methods of information search can be transferred to modelled knowledge. The characteristic features of IR systems, such as vague queries and support for uncertain knowledge, are treated in the context of the Semantic Web. The focus is on searching for facts within a knowledge domain that are either modelled explicitly or can be derived implicitly by applying inference. Building on the retrieval engine PIRE, developed at the University of Duisburg-Essen, the application of uncertain inference with probabilistic predicate logic (pDatalog) is implemented.
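    As a toy illustration of uncertain inference in the probabilistic-Datalog style named above (a sketch only; the rule syntax, the independence assumption and the probabilities are assumptions, not PIRE's actual machinery), a derived fact's probability can be computed as the product of the probabilities of the facts it is inferred from:

      # Probabilistic facts: (predicate, args) -> probability. All values hypothetical.
      facts = {
          ("about", ("doc1", "semantic_web")): 0.8,
          ("subfield_of", ("semantic_web", "information_retrieval")): 0.9,
      }

      def prob_about(doc, topic):
          """P(about(doc, topic)) via the rule
          about(D, T) <- about(D, S), subfield_of(S, T),
          assuming independence of the premises."""
          direct = facts.get(("about", (doc, topic)), 0.0)
          inferred = max(
              (p1 * p2
               for (pred1, (d, s)), p1 in facts.items() if pred1 == "about" and d == doc
               for (pred2, (s2, t)), p2 in facts.items()
               if pred2 == "subfield_of" and s2 == s and t == topic),
              default=0.0,
          )
          return max(direct, inferred)

      print(prob_about("doc1", "information_retrieval"))  # 0.8 * 0.9 = 0.72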
    Theme
    Semantic Web
  4. Tzitzikas, Y.: Collaborative ontology-based information indexing and retrieval (2002) 0.08
    0.07627682 = product of:
      0.11441522 = sum of:
        0.036784086 = weight(_text_:wide in 2281) [ClassicSimilarity], result of:
          0.036784086 = score(doc=2281,freq=2.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.1958137 = fieldWeight in 2281, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=2281)
        0.04462301 = weight(_text_:web in 2281) [ClassicSimilarity], result of:
          0.04462301 = score(doc=2281,freq=10.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.32250395 = fieldWeight in 2281, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2281)
        0.024246283 = weight(_text_:retrieval in 2281) [ClassicSimilarity], result of:
          0.024246283 = score(doc=2281,freq=4.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.18905719 = fieldWeight in 2281, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=2281)
        0.008761841 = product of:
          0.026285522 = sum of:
            0.026285522 = weight(_text_:system in 2281) [ClassicSimilarity], result of:
              0.026285522 = score(doc=2281,freq=4.0), product of:
                0.13353272 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.042397358 = queryNorm
                0.19684705 = fieldWeight in 2281, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2281)
          0.33333334 = coord(1/3)
      0.6666667 = coord(4/6)
    
    Abstract
    An information system like the Web is a continuously evolving system consisting of multiple heterogeneous information sources, covering a wide domain of discourse, and a huge number of users (human or software) with diverse characteristics and needs that produce and consume information. The challenge nowadays is to build a scalable information infrastructure enabling effective, accurate, content-based retrieval of information, in a way that adapts to the characteristics and interests of the users. The aim of this work is to propose formally sound methods for building such an information network based on ontologies, which are widely used and easy to grasp for ordinary Web users. The main results of this work are: - A novel scheme for indexing and retrieving objects according to multiple aspects or facets. The proposed scheme is a faceted scheme enriched with a method for specifying the combinations of terms that are valid. We give a model-theoretic interpretation to this model and we provide mechanisms for inferring the valid combinations of terms. This inference service can be exploited to prevent errors during the indexing process, which is very important especially when the indexing is done collaboratively by many users, and to derive "complete" navigation trees suitable for browsing the Web. The proposed scheme has several advantages over the hierarchical classification schemes currently employed by Web catalogs, namely conceptual clarity (it is easier to understand), compactness (it takes less space), and scalability (the update operations can be formulated more easily and performed more efficiently). - A flexible and efficient model for building mediators over ontology-based information sources. The proposed mediators support several modes of query translation and evaluation which can accommodate various application needs and levels of answer quality. The proposed model can be used to provide users with customized views of Web catalogs. It can also complement the techniques for building mediators over relational sources so as to support approximate translation of partially ordered domain values.
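    The core of the faceted scheme described above - terms drawn from several facets plus an explicit specification of valid cross-facet combinations, used to catch indexing errors - can be sketched as follows (a minimal illustration; the facets, terms and validity rules are invented for the example, not taken from the thesis):

      # Two hypothetical facets and an explicit list of valid cross-facet combinations.
      FACETS = {
          "Sports":   {"SeaSki", "Windsurfing"},
          "Location": {"Crete", "Alps"},
      }
      VALID = {("SeaSki", "Crete"), ("Windsurfing", "Crete")}  # e.g. no sea sports in the Alps

      def check_index(description):
          """Reject an indexing description that uses an invalid term combination."""
          sport, location = description
          if sport not in FACETS["Sports"] or location not in FACETS["Location"]:
              raise ValueError("unknown facet term")
          if (sport, location) not in VALID:
              raise ValueError(f"invalid combination: {sport} + {location}")
          return True

      print(check_index(("SeaSki", "Crete")))   # True
      # check_index(("SeaSki", "Alps")) would raise: invalid combination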
  5. Glockner, M.: Semantik Web : Die nächste Generation des World Wide Web (2004) 0.08
    0.07584052 = product of:
      0.22752154 = sum of:
        0.1287443 = weight(_text_:wide in 4532) [ClassicSimilarity], result of:
          0.1287443 = score(doc=4532,freq=2.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.685348 = fieldWeight in 4532, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.109375 = fieldNorm(doc=4532)
        0.09877724 = weight(_text_:web in 4532) [ClassicSimilarity], result of:
          0.09877724 = score(doc=4532,freq=4.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.71389294 = fieldWeight in 4532, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.109375 = fieldNorm(doc=4532)
      0.33333334 = coord(2/6)
    
  6. Schiefer, J.: Aufbau eines internationalen CDS/ISIS Nutzerforums im World Wide Web (1996) 0.08
    0.07565347 = product of:
      0.2269604 = sum of:
        0.14713635 = weight(_text_:wide in 5961) [ClassicSimilarity], result of:
          0.14713635 = score(doc=5961,freq=2.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.7832548 = fieldWeight in 5961, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.125 = fieldNorm(doc=5961)
        0.07982406 = weight(_text_:web in 5961) [ClassicSimilarity], result of:
          0.07982406 = score(doc=5961,freq=2.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.5769126 = fieldWeight in 5961, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.125 = fieldNorm(doc=5961)
      0.33333334 = coord(2/6)
    
  7. Li, Z.: ¬A domain specific search engine with explicit document relations (2013) 0.07
    0.065883726 = product of:
      0.13176745 = sum of:
        0.045980107 = weight(_text_:wide in 1210) [ClassicSimilarity], result of:
          0.045980107 = score(doc=1210,freq=2.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.24476713 = fieldWeight in 1210, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1210)
        0.074835055 = weight(_text_:web in 1210) [ClassicSimilarity], result of:
          0.074835055 = score(doc=1210,freq=18.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.5408555 = fieldWeight in 1210, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1210)
        0.010952301 = product of:
          0.032856904 = sum of:
            0.032856904 = weight(_text_:system in 1210) [ClassicSimilarity], result of:
              0.032856904 = score(doc=1210,freq=4.0), product of:
                0.13353272 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.042397358 = queryNorm
                0.24605882 = fieldWeight in 1210, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1210)
          0.33333334 = coord(1/3)
      0.5 = coord(3/6)
    
    Abstract
    The current web consists of documents that are highly heterogeneous and hard for machines to understand. The Semantic Web is a progressive movement of the World Wide Web, aiming at converting the current web of unstructured documents into a web of data. In the Semantic Web, web documents are annotated with metadata using a standardised ontology language. These annotated documents are directly processable by machines, which greatly improves their usability and usefulness. Similar problems occur at Ericsson: massive numbers of documents are created with well-defined structures. Although these documents concern domain-specific knowledge and can have rich relations, they are currently managed by a traditional search engine, which ignores the rich domain-specific information and presents little of it to users. Motivated by the Semantic Web, we aim to find standard ways to process these documents, extract rich domain-specific information and annotate the documents with formal markup languages. We propose this project to develop a domain-specific search engine that processes different documents and builds explicit relations between them. This research project has three main focuses: examining different domain-specific documents and finding ways to extract their metadata; integrating a text search engine with an ontology server; and exploring novel ways to build relations between documents. We implement this system and demonstrate its functions. As a prototype, the system provides the required features and will be extended in the future.
    Theme
    Semantic Web
  8. Krüger, C.: Evaluation des WWW-Suchdienstes GERHARD unter besonderer Beachtung automatischer Indexierung (1999) 0.07
    0.06530557 = product of:
      0.13061114 = sum of:
        0.065025695 = weight(_text_:wide in 1777) [ClassicSimilarity], result of:
          0.065025695 = score(doc=1777,freq=4.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.34615302 = fieldWeight in 1777, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1777)
        0.035277586 = weight(_text_:web in 1777) [ClassicSimilarity], result of:
          0.035277586 = score(doc=1777,freq=4.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.25496176 = fieldWeight in 1777, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1777)
        0.030307854 = weight(_text_:retrieval in 1777) [ClassicSimilarity], result of:
          0.030307854 = score(doc=1777,freq=4.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.23632148 = fieldWeight in 1777, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1777)
      0.5 = coord(3/6)
    
    Abstract
    This thesis describes and evaluates the WWW search service GERHARD (German Harvest Automated Retrieval and Directory). GERHARD is a search and navigation system for the German World Wide Web which collects exclusively scientifically relevant documents and classifies them automatically, on the basis of computational-linguistic and statistical methods, using a library classification system. The DFG project GERHARD is an attempt to develop an alternative to conventional methods of indexing the Internet by means of a World Wide Web service based on an automatic classification procedure. GERHARD is the only directory of Internet resources in the German-speaking world whose creation and updating take place entirely automatically (i.e. by machine); it restricts itself to documents on scientific WWW servers. The basic idea was to replace cost-intensive intellectual indexing and classification of web pages with computational-linguistic and statistical methods, so that the recorded Internet resources are mapped automatically onto the vocabulary of a library classification system. The URL of GERHARD is http://www.gerhard.de. This diploma thesis describes the service, with particular emphasis on the underlying indexing and classification system, and then uses a small retrieval test to check the effectiveness of GERHARD.
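    The basic mapping step described above - automatically assigning web-page text to notations of a library classification by matching its vocabulary - can be illustrated with a small sketch (hypothetical class captions and a naive term-overlap score; GERHARD's actual pipeline is computational-linguistic and statistical, not this simple):

      # Hypothetical classification captions (notation -> caption vocabulary).
      CLASSES = {
          "004.8": {"artificial", "intelligence", "machine", "learning"},
          "025.4": {"indexing", "classification", "library"},
      }

      def classify(page_text, top_n=1):
          """Rank classification notations by vocabulary overlap with the page."""
          words = set(page_text.lower().split())
          scored = sorted(
              ((len(words & caption), notation) for notation, caption in CLASSES.items()),
              reverse=True,
          )
          return [notation for score, notation in scored[:top_n] if score > 0]

      print(classify("Automatic indexing and classification for library catalogues"))
      # -> ['025.4']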
  9. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.06
    0.0635744 = product of:
      0.1271488 = sum of:
        0.059868045 = weight(_text_:web in 563) [ClassicSimilarity], result of:
          0.059868045 = score(doc=563,freq=8.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.43268442 = fieldWeight in 563, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.025717068 = weight(_text_:retrieval in 563) [ClassicSimilarity], result of:
          0.025717068 = score(doc=563,freq=2.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.20052543 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.041563697 = product of:
          0.062345542 = sum of:
            0.027880006 = weight(_text_:system in 563) [ClassicSimilarity], result of:
              0.027880006 = score(doc=563,freq=2.0), product of:
                0.13353272 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.042397358 = queryNorm
                0.20878783 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
            0.034465536 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.034465536 = score(doc=563,freq=2.0), product of:
                0.14846832 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042397358 = queryNorm
                0.23214069 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
          0.6666667 = coord(2/3)
      0.5 = coord(3/6)
    
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with the LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language- and domain-independent and requires no training data. It can be applied to tasks such as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human-written summaries in a large collection of web pages, and generate summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the alignment process from a training set and focuses on selecting high-quality multi-word terms from human-written summaries to generate suitable results for web-page summarization.
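    The LocalMaxs algorithm named above selects an n-gram as a term when its association ("glue") score is a local maximum: no contained (n-1)-gram glues together more strongly, and every containing (n+1)-gram glues together more weakly. A compact sketch of that test (with an invented glue table; the thesis pairs LocalMaxs with its own association measures):

      # Hypothetical glue scores for candidate n-grams.
      GLUE = {
          ("information",): 0.0,
          ("information", "retrieval"): 0.91,
          ("information", "retrieval", "system"): 0.55,
          ("retrieval",): 0.0,
          ("retrieval", "system"): 0.62,
      }

      def is_term(ngram):
          """LocalMaxs test: glue(ngram) must not be beaten by any contained
          (n-1)-gram, and must strictly beat every containing (n+1)-gram."""
          g = GLUE[ngram]
          n = len(ngram)
          subs = [ngram[:-1], ngram[1:]] if n > 1 else []
          supers = [k for k in GLUE if len(k) == n + 1
                    and (k[:-1] == ngram or k[1:] == ngram)]
          return all(g >= GLUE.get(s, 0.0) for s in subs) and \
                 all(g > GLUE[s] for s in supers)

      print(is_term(("information", "retrieval")))            # True: local maximum
      print(is_term(("information", "retrieval", "system")))  # False: weaker than its bigram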
    Date
    10. 1.2013 19:22:47
  10. Hannech, A.: Système de recherche d'information étendue basé sur une projection multi-espaces (2018) 0.06
    0.06197285 = product of:
      0.09295927 = sum of:
        0.026010277 = weight(_text_:wide in 4472) [ClassicSimilarity], result of:
          0.026010277 = score(doc=4472,freq=4.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.1384612 = fieldWeight in 4472, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.015625 = fieldNorm(doc=4472)
        0.02822207 = weight(_text_:web in 4472) [ClassicSimilarity], result of:
          0.02822207 = score(doc=4472,freq=16.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.2039694 = fieldWeight in 4472, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.015625 = fieldNorm(doc=4472)
        0.017144712 = weight(_text_:retrieval in 4472) [ClassicSimilarity], result of:
          0.017144712 = score(doc=4472,freq=8.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.13368362 = fieldWeight in 4472, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.015625 = fieldNorm(doc=4472)
        0.02158222 = product of:
          0.032373328 = sum of:
            0.02078053 = weight(_text_:system in 4472) [ClassicSimilarity], result of:
              0.02078053 = score(doc=4472,freq=10.0), product of:
                0.13353272 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.042397358 = queryNorm
                0.15562126 = fieldWeight in 4472, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.015625 = fieldNorm(doc=4472)
            0.011592798 = weight(_text_:29 in 4472) [ClassicSimilarity], result of:
              0.011592798 = score(doc=4472,freq=2.0), product of:
                0.14914064 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042397358 = queryNorm
                0.07773064 = fieldWeight in 4472, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.015625 = fieldNorm(doc=4472)
          0.6666667 = coord(2/3)
      0.6666667 = coord(4/6)
    
    Abstract
    Since its appearance in the early 90s, the World Wide Web (WWW or Web) has provided universal access to knowledge, and the world of information has witnessed a great revolution (the digital revolution). The Web quickly became very popular, making it the largest and most comprehensive database and knowledge base thanks to the amount and diversity of the data it contains. However, the considerable growth and evolution of these data raise important problems for users, in particular for accessing the documents most relevant to their search queries. In order to cope with this exponential explosion of data volume and to facilitate access for users, information retrieval systems (IRSs) offer various models for the representation and retrieval of web documents. Traditional IRSs index and retrieve these documents using simple keywords that are not semantically linked. This creates limitations in terms of the relevance and ease of exploration of the results. To overcome these limitations, existing techniques enrich documents by integrating external keywords from different sources. However, these systems still suffer from limitations related to the way these enrichment sources are exploited. When the different sources are used in such a way that they cannot be distinguished by the system, this limits the flexibility of the exploration models that can be applied to the results returned by that system. Users then feel lost among these results and find themselves forced to filter them manually to select the relevant information. If they want to go further, they must reformulate and narrow their search queries ever further until they reach the documents that best meet their expectations. In this way, even if the systems manage to find more relevant results, their presentation remains problematic. In order to target the search toward more specific information needs of the user and to improve the relevance and exploration of the search results, advanced IRSs adopt various data personalisation techniques, which assume that a user's current search is directly related to his profile and/or his previous browsing and search experiences.
    However, this assumption does not hold in all cases: the needs of the user evolve over time and can move away from the previous interests stored in his profile. In other cases, the user's profile may be wrongly exploited to extract or infer his new information needs. This problem is much more accentuated with ambiguous queries. When several centres of interest linked to an ambiguous query are identified in the user's profile, the system is unable to select the relevant data from that profile to answer the query. This has a direct impact on the quality of the results provided to this user. In order to overcome some of these limitations, this research thesis is concerned with the development of techniques aimed mainly at improving the relevance of the results of current IRSs and at facilitating the exploration of large collections of documents. To do this, we propose a solution based on a new concept and model of indexing and information retrieval called multi-space projection. This proposal is based on the exploitation of different categories of semantic and social information, which enrich the universe of representation of documents and search queries with several dimensions of interpretation. The originality of this representation is its ability to distinguish between the different interpretations used for describing and searching documents. This gives better visibility of the returned results and helps to provide greater flexibility of search and exploration, giving the user the ability to navigate one or more views of the data that interest him most. In addition, the proposed multidimensional representation universes for document description and query interpretation help to improve the relevance of the user's results by offering a diversity of search and exploration that helps to meet his different needs and those of other users. This study exploits different aspects of personalised search and aims to solve the problems caused by the evolution of the user's information needs. Thus, when the profile of this user is used by our system, a technique is proposed and employed to identify the interests most representative of his current needs in that profile. This technique is based on the combination of three influential factors, namely the contextual, frequency and temporal factors of the data. The ability of users to interact, to exchange ideas and opinions, and to form social networks on the Web has led systems to take an interest in the types of interaction between these users, their level of interaction, and their social roles in the system. This social information is addressed and integrated into this research work; its impact and the manner of its integration into the IR process are studied in order to improve the relevance of the results.
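    A minimal sketch of the profile-selection step described above (the weighting scheme, decay function and field names are assumptions for illustration, not the thesis's actual formula): each interest in the profile is scored by combining a contextual-similarity factor, a frequency factor and a temporal-recency factor, and the best-scoring interests are used to interpret the current query.

      import math

      def interest_score(interest, query_terms, now, w=(0.5, 0.3, 0.2)):
          """Combine contextual, frequency and temporal evidence (weights assumed)."""
          contextual = len(set(interest["terms"]) & set(query_terms)) / max(len(query_terms), 1)
          frequency = math.log1p(interest["uses"]) / 10.0             # damped usage count
          temporal = math.exp(-(now - interest["last_used"]) / 30.0)  # ~30-day decay
          return w[0] * contextual + w[1] * frequency + w[2] * temporal

      profile = [  # hypothetical user profile
          {"name": "semantic web", "terms": ["rdf", "ontology", "web"], "uses": 40, "last_used": 100.0},
          {"name": "gardening",    "terms": ["plants", "soil"],         "uses": 5,  "last_used": 20.0},
      ]
      query = ["ontology", "web", "search"]
      best = max(profile, key=lambda i: interest_score(i, query, now=110.0))
      print(best["name"])  # -> semantic web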
    Date
    29. 9.2018 18:57:38
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  11. Vocht, L. De: Exploring semantic relationships in the Web of Data : Semantische relaties verkennen in data op het web (2017) 0.06
    0.058311846 = product of:
      0.08746777 = sum of:
        0.022990054 = weight(_text_:wide in 4232) [ClassicSimilarity], result of:
          0.022990054 = score(doc=4232,freq=2.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.122383565 = fieldWeight in 4232, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4232)
        0.04989004 = weight(_text_:web in 4232) [ClassicSimilarity], result of:
          0.04989004 = score(doc=4232,freq=32.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.36057037 = fieldWeight in 4232, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4232)
        0.010715445 = weight(_text_:retrieval in 4232) [ClassicSimilarity], result of:
          0.010715445 = score(doc=4232,freq=2.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.08355226 = fieldWeight in 4232, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4232)
        0.003872223 = product of:
          0.011616669 = sum of:
            0.011616669 = weight(_text_:system in 4232) [ClassicSimilarity], result of:
              0.011616669 = score(doc=4232,freq=2.0), product of:
                0.13353272 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.042397358 = queryNorm
                0.08699492 = fieldWeight in 4232, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=4232)
          0.33333334 = coord(1/3)
      0.6666667 = coord(4/6)
    
    Abstract
    After the launch of the World Wide Web, it became clear that searching documents on the Web would not be trivial. Well-known web search engines, like Google, focus on searching web documents using keywords. The documents are structured and indexed to ensure that keywords match documents as accurately as possible. However, searching by keywords does not always suffice. It is often the case that users do not know exactly how to formulate the search query or which keywords guarantee retrieving the most relevant documents. Besides that, it occurs that users rather want to browse information than look up something specific. It turned out that there is a need for systems that enable more interactivity and facilitate the gradual refinement of search queries to explore the Web. Users expect more from the Web because the short keyword-based queries they pose during search do not suffice for all cases. On top of that, the Web is changing structurally. The Web comprises, apart from a collection of documents, more and more linked data: pieces of information structured so that they can be processed by machines. The consequently applied semantics allow users to indicate their search intentions to machines exactly. This is made possible by describing data following controlled vocabularies, concept lists composed by experts, published uniquely identifiably on the Web. Even so, it is still not trivial to explore data on the Web. There is a large variety of vocabularies, and various data sources use different terms to identify the same concepts.
    This PhD thesis describes how to effectively explore linked data on the Web. The main focus is on scenarios where users want to discover relationships between resources rather than finding out more about something specific. Searching for a specific document or piece of information fits in the theoretical framework of information retrieval and is associated with exploratory search. Exploratory search goes beyond 'looking up something' when users are seeking more detailed understanding, further investigation or navigation of the initial search results. The ideas behind exploratory search and querying linked data merge when it comes to the way knowledge is represented and indexed by machines - how data is structured and stored for optimal searchability. Queries and information should be aligned so that searches also reveal connections between results. This implies that they take into account the same semantic entities, relevant at that moment. To realize this, we research three techniques that are evaluated one by one in an experimental set-up to assess how well they succeed in their goals. In the end, the techniques are applied to a practical use case that focuses on forming a bridge between the Web and the use of digital libraries in scientific research. Our first technique focuses on the interactive visualization of search results. Linked data resources can be brought into relation with each other at will. This leads to complex and diverse graph structures. Our technique facilitates navigation and supports a workflow that starts from a broad overview of the data, allows narrowing down to the desired level of detail, and then broadening again. To validate the flow, two visualizations were implemented and presented to test users. The users judged the usability of the visualizations, how well the visualizations fit in the workflow, and to which degree their features seemed useful for the exploration of linked data.
    When we speak about finding relationships between resources, it is necessary to dive deeper into the structure. The graph structure of linked data, where the semantics give meaning to the relationships between resources, enables the execution of pathfinding algorithms. The assigned weights and heuristics are basic components of such algorithms and ultimately determine which resources are included in a path, and in which order. These paths explain indirect connections between resources. Our third technique proposes an algorithm that optimizes the choice of resources in terms of serendipity. Some optimizations guard the consistency of candidate paths, where the coherence of consecutive connections is maximized to avoid trivial and overly arbitrary paths. The implementation uses the A* algorithm, the de facto reference when it comes to heuristically optimized minimal-cost paths. The effectiveness of paths was measured based on common automatic metrics and surveys in which the users could indicate their preference for paths, each time generated in a different way. Finally, all our techniques are applied to a use case about publications in digital libraries, where they are aligned with information about scientific conferences and researchers. The application to this use case is a practical example because the different aspects of exploratory search come together. In fact, the techniques also evolved from the experiences gained while implementing the use case. Practical details about the semantic model are explained, and the implementation of the search system is clarified module by module. The evaluation positions the result, a prototype of a tool to explore scientific publications, researchers and conferences, next to some important alternatives.
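    Since the path-finding core is A*, a compact sketch may help (the toy graph, uniform edge costs and the zero heuristic are assumptions; the thesis adds serendipity-oriented weights and coherence constraints on top of this basic loop):

      import heapq

      def a_star(graph, start, goal, h=lambda n: 0):
          """Plain A*: expand by f(n) = g(n) + h(n); h=0 degenerates to Dijkstra."""
          frontier = [(h(start), 0, start, [start])]
          seen = set()
          while frontier:
              f, g, node, path = heapq.heappop(frontier)
              if node == goal:
                  return path
              if node in seen:
                  continue
              seen.add(node)
              for nxt, cost in graph.get(node, []):
                  if nxt not in seen:
                      heapq.heappush(frontier, (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
          return None

      # Hypothetical linked-data neighbourhood (resource -> [(neighbour, edge cost)]).
      graph = {
          "paper:A": [("author:X", 1), ("conf:ISWC", 1)],
          "author:X": [("paper:B", 1)],
          "conf:ISWC": [("paper:B", 2)],
      }
      print(a_star(graph, "paper:A", "paper:B"))  # -> ['paper:A', 'author:X', 'paper:B']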
    Theme
    Semantic Web
  12. Timm, A.: Fachinformation in den Bereichen Gentechnologie und Molekularbiologie am Beispiel ausgewählter Datenbanken und Dienstleistungen im World Wide Web (1996) 0.06
    0.056740098 = product of:
      0.17022029 = sum of:
        0.11035225 = weight(_text_:wide in 785) [ClassicSimilarity], result of:
          0.11035225 = score(doc=785,freq=2.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.5874411 = fieldWeight in 785, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.09375 = fieldNorm(doc=785)
        0.059868045 = weight(_text_:web in 785) [ClassicSimilarity], result of:
          0.059868045 = score(doc=785,freq=2.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.43268442 = fieldWeight in 785, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=785)
      0.33333334 = coord(2/6)
    
  13. Artemenko, O.; Shramko, M.: Entwicklung eines Werkzeugs zur Sprachidentifikation in mono- und multilingualen Texten (2005) 0.06
    0.056002658 = product of:
      0.084003985 = sum of:
        0.032186076 = weight(_text_:wide in 572) [ClassicSimilarity], result of:
          0.032186076 = score(doc=572,freq=2.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.171337 = fieldWeight in 572, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.02734375 = fieldNorm(doc=572)
        0.02469431 = weight(_text_:web in 572) [ClassicSimilarity], result of:
          0.02469431 = score(doc=572,freq=4.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.17847323 = fieldWeight in 572, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=572)
        0.015001623 = weight(_text_:retrieval in 572) [ClassicSimilarity], result of:
          0.015001623 = score(doc=572,freq=2.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.11697317 = fieldWeight in 572, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02734375 = fieldNorm(doc=572)
        0.012121975 = product of:
          0.036365926 = sum of:
            0.036365926 = weight(_text_:system in 572) [ClassicSimilarity], result of:
              0.036365926 = score(doc=572,freq=10.0), product of:
                0.13353272 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.042397358 = queryNorm
                0.2723372 = fieldWeight in 572, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=572)
          0.33333334 = coord(1/3)
      0.6666667 = coord(4/6)
    
    Abstract
    Identifying the language or languages of electronic text documents is one of the most important steps in many machine text-processing workflows. This thesis presents LangIdent, a system for language identification in monolingual and multilingual electronic text documents. The system offers both a selection of established algorithms for the language identification of monolingual text documents and a new algorithm for the language identification of multilingual text documents.
    With the spread of the Internet, the number of documents available on the World Wide Web keeps growing. Ensuring efficient access to desired information for Internet users is becoming a major challenge for the modern information society. A variety of tools are already in use to help users find their way through the growing flood of information. However, the enormous amount of unstructured and distributed information is not the only difficulty to be overcome in developing such tools. The increasing multilingualism of web content results in a need for language identification software that identifies the language(s) of electronic documents for targeted further processing. Such language identifiers can, for example, be used effectively in multilingual information retrieval, since processes of automatic index construction such as stemming, stop-word extraction etc. build on the language identification results. This thesis presents the new system "LangIdent" for language identification of electronic text documents, intended primarily for teaching and research at the University of Hildesheim. "LangIdent" contains a selection of established algorithms for monolingual language identification, which the user can select and configure interactively. In addition, a new algorithm was implemented that makes it possible to identify the languages in which a multilingual document is written. The identification is not limited to a list of the languages found; rather, the text is split into monolingual sections, each labelled with the identified language.
    The thesis is divided into two main parts. The first part consists of chapters 1-5, which lay out the theoretical foundations of language identification. The first chapter describes the language identification process and defines basic concepts. The second and third chapters present the prevailing approaches to language identification of monolingual documents and compare them by discussing their advantages and disadvantages. The fourth chapter presents some works that have dealt with language identification in multilingual texts. The first part concludes with an overview of the language identification tools already developed and available on the Internet. The second part presents the development of the LangIdent language identification system. Chapters 6 and 7 summarise the requirements placed on the system and define the most important phases of the project. The subsequent chapters 8 and 9 give the system architecture and a detailed description of its core components. Chapter 10 provides a static UML class diagram with a detailed explanation of the attributes and methods of the classes presented in the diagram. The next chapter deals with the problems encountered during system development. The operation of the program is described in chapter 12. The last chapter presents the system evaluation, covering the structure and size of the training corpora as well as the most important results, followed by a discussion.
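    Most established monolingual algorithms of the kind the thesis surveys are variants of character-n-gram profiling: compare the n-gram frequency profile of the input text against per-language training profiles and pick the nearest. A minimal sketch (tiny invented training samples and a simple overlap measure; LangIdent's actual algorithms and parameters differ):

      from collections import Counter

      def profile(text, n=3):
          """Relative character-trigram frequencies of a text."""
          text = f"  {text.lower()}  "
          grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
          total = sum(grams.values())
          return {g: c / total for g, c in grams.items()}

      def identify(text, training):
          """Pick the language whose profile overlaps most with the input's."""
          p = profile(text)
          def similarity(lang_profile):
              return sum(min(v, lang_profile.get(g, 0.0)) for g, v in p.items())
          return max(training, key=lambda lang: similarity(training[lang]))

      training = {  # toy training data, far too small for real use
          "de": profile("der die das und ist nicht eine ich sie mit"),
          "en": profile("the and is not a I you with of to this that"),
      }
      print(identify("das ist eine Sprache", training))  # -> de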
  14. Griesbaum, J.: Evaluierung hybrider Suchsysteme im WWW (2000) 0.06
    0.055413604 = product of:
      0.11082721 = sum of:
        0.055176124 = weight(_text_:wide in 2482) [ClassicSimilarity], result of:
          0.055176124 = score(doc=2482,freq=2.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.29372054 = fieldWeight in 2482, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=2482)
        0.029934023 = weight(_text_:web in 2482) [ClassicSimilarity], result of:
          0.029934023 = score(doc=2482,freq=2.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.21634221 = fieldWeight in 2482, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2482)
        0.025717068 = weight(_text_:retrieval in 2482) [ClassicSimilarity], result of:
          0.025717068 = score(doc=2482,freq=2.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.20052543 = fieldWeight in 2482, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2482)
      0.5 = coord(3/6)
    
    Abstract
    The starting point of this thesis is the problem of searching the World Wide Web. Search engines are indispensable for successful information retrieval, yet at the same time they are accused of mediocre performance. The topic of this thesis is an investigation of the retrieval effectiveness of German-language search engines, with the aim of establishing what retrieval effectiveness users can currently expect. One approach to increasing the retrieval effectiveness of search engines is to blend editorially compiled, human-created results and automatically generated results in a single hit list. The goal of this thesis is to evaluate the retrieval effectiveness of such hybrid systems in comparison with purely robot-based search engines. To this end, the fundamental problem areas in the evaluation of retrieval systems are analyzed first. Following the methodology proposed by Tague-Sutcliffe, and taking Web-specific peculiarities into account, a feasible procedure is derived. Building on this, the concrete setting for carrying out the evaluation is worked out, and a retrieval effectiveness test is conducted on the search engines Lycos.de, AltaVista.de and QualiGo.
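    A retrieval effectiveness test of this kind typically reduces to computing precision at a fixed cut-off over a set of queries and relevance judgments. The sketch below shows that core computation only; the data layout, cut-off and toy values are assumptions for illustration, not details taken from the thesis.

      # Mean precision at cut-off k over queries; purely illustrative data layout.
      def precision_at_k(results: list[str], relevant: set[str], k: int = 20) -> float:
          """Fraction of the top-k results judged relevant."""
          return sum(1 for url in results[:k] if url in relevant) / k

      def mean_precision(runs: dict[str, list[str]],
                         judgments: dict[str, set[str]], k: int = 20) -> float:
          """Average precision@k across all queries for one engine's result lists."""
          return sum(precision_at_k(runs[q], judgments[q], k) for q in runs) / len(runs)

      # runs: query -> ranked URLs from one engine; judgments: query -> relevant URLs.
      runs = {"q1": ["u1", "u2", "u3"], "q2": ["u4", "u5", "u6"]}
      judgments = {"q1": {"u1", "u3"}, "q2": {"u9"}}
      print(mean_precision(runs, judgments, k=3))  # (2/3 + 0/3) / 2 = 0.333...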
  15. Nagelschmidt, M.: Integration und Anwendung von "Semantic Web"-Technologien im betrieblichen Wissensmanagement (2012) 0.06
    0.055308517 = product of:
      0.110617034 = sum of:
        0.045980107 = weight(_text_:wide in 11) [ClassicSimilarity], result of:
          0.045980107 = score(doc=11,freq=2.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.24476713 = fieldWeight in 11, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=11)
        0.043206044 = weight(_text_:web in 11) [ClassicSimilarity], result of:
          0.043206044 = score(doc=11,freq=6.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.3122631 = fieldWeight in 11, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=11)
        0.02143089 = weight(_text_:retrieval in 11) [ClassicSimilarity], result of:
          0.02143089 = score(doc=11,freq=2.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.16710453 = fieldWeight in 11, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=11)
      0.5 = coord(3/6)
    
    Abstract
    Knowledge management is a subject area with numerous disciplinary ties, in particular to business informatics and to management, human resources and organization studies as subfields of business administration. In a broader understanding there are also ties to organizational psychology, computer science and information science. Developments in these reference disciplines can therefore also provide impulses for the concepts, methods and technologies of knowledge management. The idea, originating in computer science, of extending the World Wide Web (WWW) into a semantic network can be seen as one such impulse-giving development. Over the past decade this idea has reached a sufficient level of maturity that a potential relevance for knowledge management may also be assumed. This thesis uses a concrete conceptual approach to demonstrate how this technological impulse can be channelled to the benefit of knowledge management. Such a line of inquiry first requires working out an operational understanding of knowledge management on which the subsequent considerations can build. In addition, the architecture and operation of a "Semantic Web" are introduced, along with XML and the ontology languages RDF/RDFS and OWL as the principal tools for ontology-based knowledge representation. An approach to integrating these semantic technologies into knowledge management and applying them there is then presented, describing a largely automated knowledge modelling process followed by semantic indexing of the company's data. For illustration, a fictitious example world from the manufacturing industry is used. Finally, the benefit of this procedure is illustrated by application scenarios for information retrieval (IR) in the context of business processes.
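    To make the named building blocks a little more concrete, here is a minimal, hedged example of ontology-based representation with RDF/RDFS and a SPARQL query, using the Python library rdflib. The vocabulary (Machine, produces, mill42) is a fictitious manufacturing example in the spirit of the thesis's example world, not its actual model.

      # Toy RDF/RDFS graph for a fictitious manufacturing domain, queried with
      # SPARQL; the vocabulary is invented for illustration.
      from rdflib import Graph, Literal, Namespace, RDF, RDFS

      EX = Namespace("http://example.org/factory#")
      g = Graph()
      g.bind("ex", EX)

      # Tiny schema: a class hierarchy and a property.
      g.add((EX.MillingMachine, RDFS.subClassOf, EX.Machine))
      g.add((EX.produces, RDFS.domain, EX.Machine))

      # Instance data.
      g.add((EX.mill42, RDF.type, EX.MillingMachine))
      g.add((EX.mill42, EX.produces, EX.GearShaft))
      g.add((EX.mill42, RDFS.label, Literal("Milling machine 42", lang="en")))

      # Retrieval across the class hierarchy: all machines and their products,
      # including instances of subclasses of ex:Machine.
      query = """
      SELECT ?machine ?product WHERE {
          ?machine a/rdfs:subClassOf* ex:Machine .
          ?machine ex:produces ?product .
      }
      """
      for machine, product in g.query(query, initNs={"ex": EX, "rdfs": RDFS}):
          print(machine, "produces", product)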
  16. Kaluza, H.: Methoden und Verfahren bei der Archivierung von Internetressourcen : "The Internet Archive" und PANDORA (2002) 0.05
    0.05186505 = product of:
      0.1037301 = sum of:
        0.052020553 = weight(_text_:wide in 973) [ClassicSimilarity], result of:
          0.052020553 = score(doc=973,freq=4.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.2769224 = fieldWeight in 973, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=973)
        0.034564834 = weight(_text_:web in 973) [ClassicSimilarity], result of:
          0.034564834 = score(doc=973,freq=6.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.24981049 = fieldWeight in 973, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=973)
        0.017144712 = weight(_text_:retrieval in 973) [ClassicSimilarity], result of:
          0.017144712 = score(doc=973,freq=2.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.13368362 = fieldWeight in 973, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=973)
      0.5 = coord(3/6)
    
    Content
    "Die vorliegende Arbeit befasst sich mit den Methoden und Verfahren bei der Archivierung von Internetressourcen. Ziel ist es, anhand einer vergleichenden Beschreibung zweier zur Zeit aktiver, bzw. im Aufbau befindlicher Projekte, die Grundprobleme dieser speziellen Art der Archivierung darzustellen und deren unterschiedliche Vorgehensweisen beim Aufbau des Archivs zu beschreiben und zu vergleichen. Daraus erfolgt eine Diskussion über grundsätzliche Fragestellungen zu diesem Thema. Hierzu ist es vonnöten, zuerst auf das besondere Medium Internet, insbesondere auf das World Wide Web (WWW), einzugehen, sowie dessen Geschichte und Entstehung zu betrachten. Weiterhin soll ein besonderes Augenmerk auf die Datenmenge, die Datenstruktur und die Datentypen (hier vor allem im World Wide Web) gelegt werden. Da die daraus entstehenden Probleme für Erschließung und Retrieval, die Qualität und die Fluktuation der Angebote im Web eine wichtige Rolle im Rahmen der Archivierung von Internetressourcen darstellen, werden diese gesondert mittels kurzer Beschreibungen bestimmter Instrumente und Projekte zur Lösung derselben beschrieben. Hier finden insbesondere Suchmaschinen und Webkataloge, deren Arbeitsweise und Aufbau besondere Beachtung. Weiterhin sollen die "Virtuelle Bibliothek" und das "Dublin Core"- Projekt erläutert werden. Auf dieser Basis wird dann speziell auf das allgemeine Thema der Archivierung von Internetressourcen eingegangen. Ihre Grundgedanken und ihre Ziele sollen beschrieben und erste Diskussionsfragen und Diskrepanzen aufgezeigt werden. Ein besonderes Augenmerk gilt hier vor allem den technischen und rechtlichen Problemen, sowie Fragen des Jugendschutzes und der Zugänglichkeit zu mittlerweile verbotenen Inhalten. Einzelne Methoden der Archivierung, die vor allem im folgenden Teil anhand von Beispielen Beachtung finden, werden kurz vorgestellt. Im darauf folgenden Teil werden zwei Archivierungsprojekte detailliert beschrieben und analysiert. Einem einführenden Überblick über das jeweilige Projekt, folgen detaillierte Beschreibungen zu Projektverlauf, Philosophie und Vorgehensweise. Die Datenbasis und das Angebot, sowie die Funktionalitäten werden einer genauen Untersuchung unterzogen. Stärken und Schwächen werden genannt, und wenn möglich, untereinander verglichen. Hier ist vor allem auch die Frage von Bedeutung, ob das Angebot a) den Ansprüchen und Zielsetzungen des Anbieters genügt, und ob es b) den allgemeinen Grundfragen der Archivierung von Internetressourcen gleichkommt, die in Kapitel 3 genannt worden sind. Auf Basis aller Teile soll dann abschließend der derzeitige Stand im Themengebiet diskutiert werden. Die Arbeit schließt mit einer endgültigen Bewertung und alternativen Lösungen."
  17. Líska, M.: Evaluation of mathematics retrieval (2013) 0.05
    0.0511117 = product of:
      0.1022234 = sum of:
        0.03492303 = weight(_text_:web in 1653) [ClassicSimilarity], result of:
          0.03492303 = score(doc=1653,freq=2.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.25239927 = fieldWeight in 1653, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1653)
        0.051967148 = weight(_text_:retrieval in 1653) [ClassicSimilarity], result of:
          0.051967148 = score(doc=1653,freq=6.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.40520695 = fieldWeight in 1653, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1653)
        0.01533322 = product of:
          0.04599966 = sum of:
            0.04599966 = weight(_text_:system in 1653) [ClassicSimilarity], result of:
              0.04599966 = score(doc=1653,freq=4.0), product of:
                0.13353272 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.042397358 = queryNorm
                0.34448233 = fieldWeight in 1653, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1653)
          0.33333334 = coord(1/3)
      0.5 = coord(3/6)
    
    Abstract
    The thesis deals with the evaluation of mathematics information retrieval (IR). It gives an overview of the history of regular IR evaluation, of the initiatives engaged in this field of research, and of the most common methods and measures used for evaluation. The findings are applied to the specifics of mathematics retrieval. The thesis also summarizes the state of the art of the MIaS math search system, which is already in use in an international web portal, and describes the latest developments aiming towards the second version of the system. In addition to participating in the international evaluation conference and workshop, MIaS is tested for effectiveness and efficiency in this work. The measured performance indicators are evaluated and future work is suggested accordingly.
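    The abstract does not list the concrete measures used; as a hedged illustration of a measure common in such effectiveness tests, the sketch below computes nDCG at a cut-off from graded relevance judgments. The grading scale and cut-off are assumptions, not details from the thesis.

      # Normalized discounted cumulative gain at cut-off k, a standard graded
      # effectiveness measure; the judgment values here are illustrative.
      import math

      def ndcg_at_k(gains: list[int], k: int = 10) -> float:
          """gains: graded relevance of the results in ranked order."""
          def dcg(gs: list[int]) -> float:
              return sum(g / math.log2(i + 2) for i, g in enumerate(gs[:k]))
          ideal = dcg(sorted(gains, reverse=True))
          return dcg(gains) / ideal if ideal > 0 else 0.0

      # One query: the system's ranking judged on a 0-2 scale (2 = highly relevant).
      print(round(ndcg_at_k([2, 0, 1, 2, 0], k=5), 3))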
  18. Haveliwala, T.: Context-Sensitive Web search (2005) 0.05
    0.050781906 = product of:
      0.10156381 = sum of:
        0.06310646 = weight(_text_:web in 2567) [ClassicSimilarity], result of:
          0.06310646 = score(doc=2567,freq=20.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.45608947 = fieldWeight in 2567, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2567)
        0.029695513 = weight(_text_:retrieval in 2567) [ClassicSimilarity], result of:
          0.029695513 = score(doc=2567,freq=6.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.23154683 = fieldWeight in 2567, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=2567)
        0.008761841 = product of:
          0.026285522 = sum of:
            0.026285522 = weight(_text_:system in 2567) [ClassicSimilarity], result of:
              0.026285522 = score(doc=2567,freq=4.0), product of:
                0.13353272 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.042397358 = queryNorm
                0.19684705 = fieldWeight in 2567, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2567)
          0.33333334 = coord(1/3)
      0.5 = coord(3/6)
    
    Abstract
    As the Web continues to grow and encompass broader and more diverse sources of information, providing effective search facilities to users becomes an increasingly challenging problem. To help users deal with the deluge of Web-accessible information, we propose a search system which makes use of context to improve search results in a scalable way. By context, we mean any sources of information, in addition to any search query, that provide clues about the user's true information need. For instance, a user's bookmarks and search history can be considered a part of the search context. We consider two types of context-based search. The first type of functionality we consider is "similarity search." In this case, as the user is browsing Web pages, URLs for pages similar to the current page are retrieved and displayed in a side panel. No query is explicitly issued; context alone (i.e., the page currently being viewed) is used to provide the user with useful related information. The second type of functionality involves taking search context into account when ranking results to standard search queries. Web search differs from traditional information retrieval tasks in several major ways, making effective context-sensitive Web search challenging. First, scalability is of critical importance. With billions of publicly accessible documents, the Web is much larger than traditional datasets. Similarly, with millions of search queries issued each day, the query load is much higher than for traditional information retrieval systems. Second, there are no guarantees on the quality of Web pages, with Web-authors taking an adversarial, rather than cooperative, approach in attempts to inflate the rankings of their pages. Third, there is a significant amount of metadata embodied in the link structure corresponding to the hyperlinks between Web pages that can be exploited during the retrieval process. In this thesis, we design a search system, using the Stanford WebBase platform, that exploits the link structure of the Web to provide scalable, context-sensitive search.
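    The last point, exploiting link structure for scalable context-sensitive ranking, is the idea behind personalized PageRank: bias the random jump toward pages representing the user's context. The power-iteration sketch below uses toy data and assumes every link target appears as a key in the graph; the system described in the thesis, built on the Stanford WebBase platform, is of course far more elaborate.

      # Personalized PageRank by power iteration; toy graph, illustrative only.
      def personalized_pagerank(links: dict[str, list[str]],
                                context: set[str],
                                damping: float = 0.85,
                                iters: int = 50) -> dict[str, float]:
          pages = list(links)
          # Teleport vector: jump only to context pages instead of uniformly.
          teleport = {p: (1 / len(context) if p in context else 0.0) for p in pages}
          rank = {p: 1 / len(pages) for p in pages}
          for _ in range(iters):
              new = {p: (1 - damping) * teleport[p] for p in pages}
              for p, outs in links.items():
                  share = rank[p] / len(outs) if outs else 0.0
                  for q in outs:
                      new[q] += damping * share
              rank = new
          return rank

      web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
      print(personalized_pagerank(web, context={"a"}))  # pages near 'a' gain rank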
  19. Kara, S.: ¬An ontology-based retrieval system using semantic indexing (2012) 0.05
    0.049977418 = product of:
      0.099954836 = sum of:
        0.029934023 = weight(_text_:web in 3829) [ClassicSimilarity], result of:
          0.029934023 = score(doc=3829,freq=2.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.21634221 = fieldWeight in 3829, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3829)
        0.051434137 = weight(_text_:retrieval in 3829) [ClassicSimilarity], result of:
          0.051434137 = score(doc=3829,freq=8.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.40105087 = fieldWeight in 3829, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=3829)
        0.018586671 = product of:
          0.05576001 = sum of:
            0.05576001 = weight(_text_:system in 3829) [ClassicSimilarity], result of:
              0.05576001 = score(doc=3829,freq=8.0), product of:
                0.13353272 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.042397358 = queryNorm
                0.41757566 = fieldWeight in 3829, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3829)
          0.33333334 = coord(1/3)
      0.5 = coord(3/6)
    
    Abstract
    In this thesis, we present an ontology-based information extraction and retrieval system and its application to the soccer domain. In general, we deal with three issues in semantic search, namely usability, scalability and retrieval performance. We propose a keyword-based semantic retrieval approach. The performance of the system is improved considerably using domain-specific information extraction, inference and rules. Scalability is achieved by adapting a semantic indexing approach. The system is implemented using state-of-the-art Semantic Web technologies and its performance is evaluated against traditional systems as well as query expansion methods. Furthermore, a detailed evaluation is provided to observe the performance gain due to domain-specific information extraction and inference. Finally, we show how we use semantic indexing to solve simple structural ambiguities.
    Theme
    Semantic Web
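    As a speculative illustration of the semantic-indexing idea in the abstract above, the sketch below indexes documents by ontology concepts together with their ancestors, so that a query on a broader concept also retrieves documents annotated with narrower ones. The toy soccer ontology and helper names are invented, not taken from the thesis.

      # Toy semantic index over an invented soccer ontology.
      PARENT = {"Striker": "Player", "Goalkeeper": "Player", "Player": "Agent"}

      def expand(concept: str) -> set[str]:
          """The concept plus all of its ancestors in the toy ontology."""
          out = {concept}
          while concept in PARENT:
              concept = PARENT[concept]
              out.add(concept)
          return out

      index: dict[str, set[str]] = {}

      def add_doc(doc_id: str, concepts: list[str]) -> None:
          """Index a document under its concepts and all their ancestors."""
          index[doc_id] = set().union(*(expand(c) for c in concepts))

      def search(concept: str) -> list[str]:
          return [d for d, cs in index.items() if concept in cs]

      add_doc("d1", ["Striker"])
      add_doc("d2", ["Goalkeeper"])
      print(search("Player"))  # -> ['d1', 'd2'], via inference over the hierarchy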
  20. Müller, C.: Allegro im World Wide Web : Programmierung eines Interfaces (1997) 0.05
    0.047283422 = product of:
      0.14185026 = sum of:
        0.091960214 = weight(_text_:wide in 1486) [ClassicSimilarity], result of:
          0.091960214 = score(doc=1486,freq=2.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.48953426 = fieldWeight in 1486, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.078125 = fieldNorm(doc=1486)
        0.04989004 = weight(_text_:web in 1486) [ClassicSimilarity], result of:
          0.04989004 = score(doc=1486,freq=2.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.36057037 = fieldWeight in 1486, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=1486)
      0.33333334 = coord(2/6)
    

Languages

  • d 164
  • e 36
  • a 1
  • f 1
  • hu 1
  • pt 1

Types

Themes