Search (72 results, page 1 of 4)

  • Active filter: type_ss:"x"
  1. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.07
    0.068215966 = product of:
      0.13643193 = sum of:
        0.117351316 = weight(_text_:term in 563) [ClassicSimilarity], result of:
          0.117351316 = score(doc=563,freq=6.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.5357528 = fieldWeight in 563, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.019080611 = product of:
          0.038161222 = sum of:
            0.038161222 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.038161222 = score(doc=563,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23214069 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
    Date
    10. 1.2013 19:22:47
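    A note on the score displays: each tree above is Lucene ClassicSimilarity "explain" output. A clause's score is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm; clause scores are summed and multiplied by a coordination factor. The following Python sketch (our reconstruction for illustration; the variable names are ours, not Lucene's) recomputes the score of result 1 from the quantities shown in its tree:

      import math

      def idf(doc_freq, max_docs):
          # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def clause_score(freq, doc_freq, max_docs, query_norm, field_norm):
          tf = math.sqrt(freq)                      # tf = sqrt(termFreq)
          w = idf(doc_freq, max_docs)
          query_weight = w * query_norm             # "queryWeight" in the tree
          field_weight = tf * w * field_norm        # "fieldWeight" in the tree
          return query_weight * field_weight

      # weight(_text_:term in 563): freq=6.0, docFreq=1130
      s_term = clause_score(6.0, 1130, 44218, 0.04694356, 0.046875)
      # weight(_text_:22 in 563): freq=2.0, docFreq=3622, scaled by coord(1/2)
      s_22 = clause_score(2.0, 3622, 44218, 0.04694356, 0.046875) * 0.5
      total = (s_term + s_22) * 0.5                 # coord(2/4): 2 of 4 query clauses match
      print(s_term, total)                          # ~0.117351316 and ~0.068215966, as shown above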
  2. Nicoletti, M.: Automatische Indexierung (2001) 0.03
    0.033876404 = product of:
      0.13550562 = sum of:
        0.13550562 = weight(_text_:term in 4326) [ClassicSimilarity], result of:
          0.13550562 = score(doc=4326,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.618634 = fieldWeight in 4326, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.09375 = fieldNorm(doc=4326)
      0.25 = coord(1/4)
    
    Content
    Contents: 1. Task - 2. Identification of multi-word groups - 2.1 Definition - 3. Marking of multi-word groups - 4. Base forms - 5. Term and document frequency --- term weighting - 6. The threshold as a control instrument - 7. Inverted index. Cf.: http://www.grin.com/de/e-book/104966/automatische-indexierung.
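    Steps 5-7 of this outline (term and document frequency, term weighting, threshold, inverted index) form a classic indexing pipeline. As a minimal sketch, assuming a plain tf-idf weighting and an invented toy corpus (neither is taken from Nicoletti's text):

      import math
      from collections import Counter, defaultdict

      docs = {
          "d1": "automatic indexing of multi word groups",
          "d2": "term weighting and document frequency for indexing",
          "d3": "inverted index structures",
      }

      index = defaultdict(dict)              # inverted index: term -> {doc_id: weight}
      for doc_id, text in docs.items():
          for term, freq in Counter(text.split()).items():
              index[term][doc_id] = freq

      n_docs = len(docs)
      threshold = 0.5                        # Schwellenwert: drop low-weight postings
      for term, postings in index.items():
          term_idf = math.log(n_docs / len(postings)) + 1.0
          for doc_id in list(postings):
              weight = postings[doc_id] * term_idf   # tf * idf term weighting
              if weight < threshold:
                  del postings[doc_id]
              else:
                  postings[doc_id] = weight

      print(index["indexing"])               # postings that survived the threshold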
  3. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.03
    0.027959555 = product of:
      0.11183822 = sum of:
        0.11183822 = product of:
          0.4473529 = sum of:
            0.4473529 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
              0.4473529 = score(doc=973,freq=2.0), product of:
                0.39798802 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04694356 = queryNorm
                1.1240361 = fieldWeight in 973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.09375 = fieldNorm(doc=973)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Content
    Cf.: http://creativechoice.org/doc/HansJonas.pdf.
  4. Stock, W.G.: Wissenschaftliche Informationen - metawissenschaftlich betrachtet : eine Theorie der wissenschaftlichen Information (1980) 0.03
    0.027946608 = product of:
      0.11178643 = sum of:
        0.11178643 = weight(_text_:term in 182) [ClassicSimilarity], result of:
          0.11178643 = score(doc=182,freq=4.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.510347 = fieldWeight in 182, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0546875 = fieldNorm(doc=182)
      0.25 = coord(1/4)
    
    Abstract
    The subject of this study is the meta-scientific consideration of information in the sciences. ... The fundamental term "information" is defined so generally that all definition variants known to date (which are often discipline-specific) can be derived from it. "Information" is regarded here as the whole of "signal" (the material aspect) and "Informen" (the ideational aspect).
  5. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.02
    0.023903906 = product of:
      0.095615625 = sum of:
        0.095615625 = product of:
          0.19123125 = sum of:
            0.14911763 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.14911763 = score(doc=5820,freq=2.0), product of:
                0.39798802 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04694356 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
            0.042113617 = weight(_text_:based in 5820) [ClassicSimilarity], result of:
              0.042113617 = score(doc=5820,freq=10.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.2977476 = fieldWeight in 5820, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.5 = coord(2/4)
      0.25 = coord(1/4)
    
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations while taking their uncertainties into account. Finally, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity-graph representations for texts and to rank documents using these structured representations. This dissertation overcomes the limitations of word-based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards a new generation of intelligent, semantic, and structured information retrieval.
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
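    One building block of entry 5, the bag-of-entities representation, is easy to illustrate. In the sketch below (our illustration, not the author's code; the entity IDs and annotations are invented), documents are represented by their automatic entity annotations and ranked by how often the query's entities occur in them:

      from collections import Counter

      # hypothetical output of an entity linker: entity IDs per document
      doc_entities = {
          "d1": ["Q180160", "Q816826", "Q180160"],
          "d2": ["Q816826"],
          "d3": ["Q42"],
      }
      query_entities = ["Q180160", "Q816826"]    # entities linked in the query

      def bag_of_entities_score(query_ents, doc_ents):
          bag = Counter(doc_ents)                 # the document's bag of entities
          return sum(bag[e] for e in query_ents)  # match in entity space, not word space

      ranked = sorted(doc_entities,
                      key=lambda d: bag_of_entities_score(query_entities, doc_entities[d]),
                      reverse=True)
      print(ranked)                              # ['d1', 'd2', 'd3']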
  6. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.02
    0.023348149 = product of:
      0.093392596 = sum of:
        0.093392596 = product of:
          0.18678519 = sum of:
            0.14911763 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.14911763 = score(doc=701,freq=2.0), product of:
                0.39798802 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04694356 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
            0.037667565 = weight(_text_:based in 701) [ClassicSimilarity], result of:
              0.037667565 = score(doc=701,freq=8.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.26631355 = fieldWeight in 701, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.5 = coord(2/4)
      0.25 = coord(1/4)
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactical nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning, not its representation). This leads to retrieval results of very low usefulness for the user's task at hand. In the last ten years, ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the very ambiguous nature of the retrieval process, in which a user, unfamiliar with the underlying repository and/or query syntax, only approximates his information need in a query, implies the necessity of including the user in the retrieval process more actively in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In this way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure, strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the ability to conceptualise a user's information need in the right manner, and to interpret the retrieval results accordingly, is a key issue in realizing much more meaningful information retrieval systems.
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  7. Witschel, H.F.: Global and local resources for peer-to-peer text retrieval (2008) 0.02
    0.022674438 = product of:
      0.045348875 = sum of:
        0.0058264043 = product of:
          0.023305617 = sum of:
            0.023305617 = weight(_text_:based in 127) [ClassicSimilarity], result of:
              0.023305617 = score(doc=127,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.1647731 = fieldWeight in 127, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=127)
          0.25 = coord(1/4)
        0.039522473 = weight(_text_:term in 127) [ClassicSimilarity], result of:
          0.039522473 = score(doc=127,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.18043491 = fieldWeight in 127, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.02734375 = fieldNorm(doc=127)
      0.5 = coord(2/4)
    
    Abstract
    This thesis is organised as follows: Chapter 2 gives a general introduction to the field of information retrieval, covering its most important aspects. Further, the tasks of distributed and peer-to-peer information retrieval (P2PIR) are introduced, motivating their application and characterising the special challenges that they involve, including a review of existing architectures and search protocols in P2PIR. Finally, chapter 2 presents approaches to evaluating the effectiveness of both traditional and peer-to-peer IR systems. Chapter 3 contains a detailed account of state-of-the-art information retrieval models and algorithms. This encompasses models for matching queries against document representations, term weighting algorithms, approaches to feedback and associative retrieval, as well as distributed retrieval. It thus defines important terminology for the following chapters. The notion of "multi-level association graphs" (MLAGs) is introduced in chapter 4. An MLAG is a simple, graph-based framework that allows one to model most of the theoretical and practical approaches to IR presented in chapter 3. Moreover, it provides an easy-to-grasp way of defining and incorporating new entities into IR modeling, such as paragraphs or peers, dividing them conceptually while at the same time connecting them to each other in a meaningful way. This allows for a unified view of many IR tasks, including that of distributed and peer-to-peer search. Starting from related work and a formal definition of the framework, the possibilities of modeling that it provides are discussed in detail, followed by an experimental section that shows how new insights gained from modeling inside the framework can lead to novel combinations of principles and eventually to improved retrieval effectiveness.
    Chapter 5 empirically tackles the first of the two research questions formulated above, namely the question of global collection statistics. More precisely, it studies possibilities of radically simplified results merging. The simplification comes from the attempt - without having knowledge of the complete collection - to equip all peers with the same global statistics, making document scores comparable across peers. What is examined is the question of how we can obtain such global statistics and to what extent their use will lead to a drop in retrieval effectiveness. In chapter 6, the second research question is tackled, namely that of making forwarding decisions for queries, based on profiles of other peers. After a review of related work in that area, the chapter first defines the approaches that will be compared against each other. Then, a novel evaluation framework is introduced, including a new measure for comparing results of a distributed search engine against those of a centralised one. Finally, the actual evaluation is performed using the new framework.
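    The idea behind chapter 5, giving every peer the same global statistics so that scores stay comparable, can be made concrete in a few lines. A minimal sketch, assuming a shared global document-frequency table and tf-idf scoring (the statistics and document names are invented):

      import math

      # global statistics distributed to every peer in advance (assumed values)
      GLOBAL_DF = {"retrieval": 1130, "peer": 332}
      GLOBAL_N = 44218

      def peer_score(term_freqs):
          # tf-idf with the *global* idf, so scores from different peers are comparable
          return sum(math.sqrt(tf) * math.log(GLOBAL_N / GLOBAL_DF[t])
                     for t, tf in term_freqs.items() if t in GLOBAL_DF)

      # result lists computed independently by two peers for the same query
      peer_a = [("docA1", peer_score({"retrieval": 3})),
                ("docA2", peer_score({"peer": 1}))]
      peer_b = [("docB1", peer_score({"retrieval": 1, "peer": 2}))]

      # merging then reduces to a plain sort on the comparable scores
      merged = sorted(peer_a + peer_b, key=lambda pair: pair[1], reverse=True)
      print(merged)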
  8. Neet, H.: Assoziationsrelationen in Dokumentationslexika für die verbale Sacherschließung (1984) 0.02
    0.02258427 = product of:
      0.09033708 = sum of:
        0.09033708 = weight(_text_:term in 1254) [ClassicSimilarity], result of:
          0.09033708 = score(doc=1254,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.41242266 = fieldWeight in 1254, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0625 = fieldNorm(doc=1254)
      0.25 = coord(1/4)
    
    Abstract
    Thesauri and documentation lexicons can be understood as variants of onomasiological dictionaries, whose particular interest for linguistics lies in the fact that they specify equivalence, hierarchy, and association relations. Rules and contributions dealing with the indication of "related" concepts in library and documentation practice are reviewed. Examples of "see also" and "related term" references are listed from three German-language subject indexes. The association relations are divided into paradigmatic and syntagmatic relations; groupings by conceptual fields and association fields are also possible. Investigations of association relations in the subject area "Buchwesen" (the book trade) confirm the assumption that the majority of references concern the joint occurrence of certain concepts in typical contexts of extra-linguistic reality.
  9. Hannech, A.: Système de recherche d'information étendue basé sur une projection multi-espaces (2018) 0.02
    0.020024482 = product of:
      0.040048964 = sum of:
        0.0040776334 = product of:
          0.016310534 = sum of:
            0.016310534 = weight(_text_:based in 4472) [ClassicSimilarity], result of:
              0.016310534 = score(doc=4472,freq=6.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.11531715 = fieldWeight in 4472, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.015625 = fieldNorm(doc=4472)
          0.25 = coord(1/4)
        0.035971332 = weight(_text_:frequency in 4472) [ClassicSimilarity], result of:
          0.035971332 = score(doc=4472,freq=2.0), product of:
            0.27643865 = queryWeight, product of:
              5.888745 = idf(docFreq=332, maxDocs=44218)
              0.04694356 = queryNorm
            0.1301241 = fieldWeight in 4472, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.888745 = idf(docFreq=332, maxDocs=44218)
              0.015625 = fieldNorm(doc=4472)
      0.5 = coord(2/4)
    
    Abstract
    However, this assumption does not hold in all cases: the needs of the user evolve over time and can move away from the previous interests stored in his profile. In other cases, the user's profile may be exploited wrongly to extract or infer new information needs. This problem is much more accentuated with ambiguous queries. When multiple points of interest (POIs) linked to a search query are identified in the user's profile, the system is unable to select the relevant data from that profile to respond to the request. This has a direct impact on the quality of the results provided to the user. In order to overcome some of these limitations, this research thesis develops techniques aimed mainly at improving the relevance of the results of current information retrieval systems (SRIs) and at facilitating the exploration of large collections of documents. To do this, we propose a solution based on a new concept and model of indexing and information retrieval called multi-spaces projection. This proposal is based on the exploitation of different categories of semantic and social information, which enrich the universe of document representations and search queries with several dimensions of interpretation. The originality of this representation is its ability to distinguish between the different interpretations used for the description of, and the search for, documents. This gives better visibility of the returned results and provides greater flexibility in search and exploration, giving the user the ability to navigate the view or views of the data that interest him most. In addition, the proposed multidimensional representation universes for document description and search-query interpretation help to improve the relevance of the user's results by providing a diversity of search and exploration that helps meet his diverse needs and those of other, different users. This study exploits different aspects of personalized search and aims to solve the problems caused by the evolution of the user's information needs. Thus, when the profile of this user is used by our system, a technique is proposed and used to identify the interests in his profile most representative of his current needs. This technique is based on the combination of three influential factors: the contextual, frequency, and temporal factors of the data. The ability of users to interact, exchange ideas and opinions, and form social networks on the Web has led systems to take into account the interactions between these users as well as their social roles in the system. This social information is discussed and integrated into this research work; its impact, and how it is integrated into the IR process, are studied in order to improve the relevance of the results.
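    The combination of contextual, frequency, and temporal factors mentioned above can be illustrated with a toy scoring function. The weights, the exponential decay, and the profile entries below are our assumptions for illustration only; the thesis does not commit to this exact model:

      import time

      def interest_score(interest, now, w_ctx=0.4, w_freq=0.3, w_time=0.3,
                         half_life_days=30.0):
          # temporal factor: halve the contribution every half_life_days
          recency = 0.5 ** ((now - interest["last_used"]) / (half_life_days * 86400))
          return (w_ctx * interest["context_overlap"]    # contextual factor, in [0, 1]
                  + w_freq * interest["norm_frequency"]  # frequency factor, in [0, 1]
                  + w_time * recency)

      now = time.time()
      profile = [  # two stored interests competing for an ambiguous query
          {"name": "jaguar (car)",    "context_overlap": 0.8,
           "norm_frequency": 0.2, "last_used": now - 5 * 86400},
          {"name": "jaguar (animal)", "context_overlap": 0.1,
           "norm_frequency": 0.9, "last_used": now - 200 * 86400},
      ]
      best = max(profile, key=lambda i: interest_score(i, now))
      print(best["name"])    # the contextually matching, recently used interest wins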
  10. Carlin, S.A.: Schlagwortvergabe durch Nutzende (Tagging) als Hilfsmittel zur Suche im Web : Ansatz, Modelle, Realisierungen (2006) 0.01
    0.014115169 = product of:
      0.056460675 = sum of:
        0.056460675 = weight(_text_:term in 2476) [ClassicSimilarity], result of:
          0.056460675 = score(doc=2476,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.25776416 = fieldWeight in 2476, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2476)
      0.25 = coord(1/4)
    
    Abstract
    At the beginning of the World Wide Web era, hand-maintained link lists and directories, and links e-mailed to friends and colleagues, sufficed to find the information one was looking for; but soon full-text search engines and semi-automatically maintained catalogs became necessary to master the ever-swelling information floods of the Web. Today these dams have long since broken: many millions of websites hold trillions of individual pages of information, to say nothing of databases and otherwise hidden information. With full-text search engines, this mass no longer yields satisfying results. Either one builds long search terms with many exclusions and just as many non-exclusive OR operators to cover different spellings of the same term, or one selects in advance exactly the data source to which to put one's questions. Yet often only classic web search engines remain, especially when the searcher is not an information specialist with knowledge of special databases but, seen from that vantage point, a layperson. And one faces this problem not only on the Web itself but also in company-internal intranets. Thousands of indexed documents may be a benchmark by which the success of an intranet's introduction is measured, but that says nothing about its usefulness, which usually falls short of expectations, above all for the employees who actually have to work with the intranet. Decisive for finding information on the Internet and in intranets is an easy-to-use and easily adaptable means of discovering interesting new content. Tags offer one possible solution.
  11. Stünkel, M.: Neuere Methoden der inhaltlichen Erschließung schöner Literatur in öffentlichen Bibliotheken (1986) 0.01
    0.012720408 = product of:
      0.05088163 = sum of:
        0.05088163 = product of:
          0.10176326 = sum of:
            0.10176326 = weight(_text_:22 in 5815) [ClassicSimilarity], result of:
              0.10176326 = score(doc=5815,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.61904186 = fieldWeight in 5815, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=5815)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    4. 8.2006 21:35:22
  12. Kiren, T.: ¬A clustering based indexing technique of modularized ontologies for information retrieval (2017) 0.01
    0.012588892 = product of:
      0.025177784 = sum of:
        0.012457376 = product of:
          0.049829505 = sum of:
            0.049829505 = weight(_text_:based in 4399) [ClassicSimilarity], result of:
              0.049829505 = score(doc=4399,freq=14.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.35229972 = fieldWeight in 4399, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4399)
          0.25 = coord(1/4)
        0.012720408 = product of:
          0.025440816 = sum of:
            0.025440816 = weight(_text_:22 in 4399) [ClassicSimilarity], result of:
              0.025440816 = score(doc=4399,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.15476047 = fieldWeight in 4399, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4399)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Indexing plays a vital role in information retrieval. With the availability of huge volumes of information, it has become necessary to index the information in such a way as to make it easier for end users to find the information they want efficiently and accurately. Keyword-based indexing uses words as indexing terms. It is not capable of capturing the implicit relations among terms or the semantics of the words in the document. To overcome this limitation, ontology-based indexing came into existence, which allows semantics-based indexing to resolve complex and indirect user queries. Ontologies are used for document indexing, which allows semantics-based information retrieval. Either existing ontologies or ones constructed from scratch are currently used for indexing. Constructing ontologies from scratch is a labor-intensive task and requires extensive domain knowledge, whereas use of an existing ontology may leave some important concepts in documents un-annotated. Using multiple ontologies can overcome the problem of missing concepts to a great extent, but it is difficult to manage multiple ontologies (which their developers change over time), and ontology heterogeneity also arises because the ontologies are constructed by different developers. One possible solution to the problems of managing multiple ontologies and of building from scratch is to use modular ontologies for indexing.
    Modular ontologies are built in a modular manner by combining modules from multiple relevant ontologies. Ontology heterogeneity also arises during modular ontology construction, because multiple ontologies are being dealt with during this process. Ontologies therefore need to be aligned before being used for modular ontology construction. The existing approaches to ontology alignment compare all the concepts of each ontology to be aligned, and are hence not optimized in terms of time and search-space utilization. A new indexing technique based on modular ontology is proposed, together with an efficient ontology alignment technique to solve the heterogeneity problem during the construction of the modular ontology. Results are satisfactory, as precision and recall improve by 8% and 10%, respectively. The values of Pearson's correlation coefficient for degree of similarity, time, search-space requirement, precision, and recall are close to 1, which shows that the results are significant. Further research can be carried out on using the modular-ontology-based indexing technique for multimedia information retrieval and biomedical information retrieval.
    Date
    20. 1.2015 18:30:22
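    The mechanism that entry 12 builds on, indexing documents by ontology concepts instead of words so that synonymous queries still match, can be sketched briefly. A minimal illustration, assuming an invented two-concept table stitched together from ontology modules (not the author's ontology or code):

      from collections import defaultdict

      concepts = {   # hypothetical modules; concept IDs and synonym sets are invented
          "C_retrieval": {"retrieval", "search", "querying"},
          "C_index": {"index", "indexing"},
      }
      term_to_concept = {t: c for c, terms in concepts.items() for t in terms}

      def annotate(text):
          # semantic indexing step: map each known word to its concept ID
          return [term_to_concept[w] for w in text.split() if w in term_to_concept]

      index = defaultdict(set)   # concept ID -> set of documents
      docs = {"d1": "semantic search and indexing", "d2": "index structures"}
      for doc_id, text in docs.items():
          for concept in annotate(text):
              index[concept].add(doc_id)

      # a query using a synonym ("querying") still reaches d1 via the shared concept
      print(index[term_to_concept["querying"]], index[term_to_concept["index"]])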
  13. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.01
    0.011649815 = product of:
      0.04659926 = sum of:
        0.04659926 = product of:
          0.18639705 = sum of:
            0.18639705 = weight(_text_:3a in 4997) [ClassicSimilarity], result of:
              0.18639705 = score(doc=4997,freq=2.0), product of:
                0.39798802 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04694356 = queryNorm
                0.46834838 = fieldWeight in 4997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4997)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Content
    PhD dissertation at the International Doctorate School in Information and Communication Technology. Cf.: https://core.ac.uk/download/pdf/150083013.pdf.
  14. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.01
    0.011649815 = product of:
      0.04659926 = sum of:
        0.04659926 = product of:
          0.18639705 = sum of:
            0.18639705 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.18639705 = score(doc=4388,freq=2.0), product of:
                0.39798802 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04694356 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Footnote
    Cf.: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls.
  15. Piros, A.: Az ETO-jelzetek automatikus interpretálásának és elemzésének kérdései (2018) 0.01
    0.011649815 = product of:
      0.04659926 = sum of:
        0.04659926 = product of:
          0.18639705 = sum of:
            0.18639705 = weight(_text_:3a in 855) [ClassicSimilarity], result of:
              0.18639705 = score(doc=855,freq=2.0), product of:
                0.39798802 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04694356 = queryNorm
                0.46834838 = fieldWeight in 855, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=855)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Content
    See also: New automatic interpreter for complex UDC numbers. At: https://udcc.org/files/AttilaPiros_EC_36-37_2014-2015.pdf
  16. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.01
    0.011649815 = product of:
      0.04659926 = sum of:
        0.04659926 = product of:
          0.18639705 = sum of:
            0.18639705 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
              0.18639705 = score(doc=1000,freq=2.0), product of:
                0.39798802 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04694356 = queryNorm
                0.46834838 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
  17. Slavic-Overfield, A.: Classification management and use in a networked environment : the case of the Universal Decimal Classification (2005) 0.01
    0.011292135 = product of:
      0.04516854 = sum of:
        0.04516854 = weight(_text_:term in 2191) [ClassicSimilarity], result of:
          0.04516854 = score(doc=2191,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.20621133 = fieldWeight in 2191, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.03125 = fieldNorm(doc=2191)
      0.25 = coord(1/4)
    
    Abstract
    In the Internet information space, advanced information retrieval (IR) methods and automatic text processing are used in conjunction with traditional knowledge organization systems (KOS). New information technology provides a platform for better KOS publishing, exploitation and sharing, both for human and machine use. Networked KOS services are now being planned and developed as powerful tools for resource discovery. They will enable automatic contextualisation, interpretation and query matching to different indexing languages. The Semantic Web promises to be an environment in which the quality of semantic relationships in bibliographic classification systems can be fully exploited. Their use in the networked environment is, however, limited by the fact that they are not prepared or made available for advanced machine processing. The UDC was chosen for this research because of its widespread use and its long-term presence in online information retrieval systems. It was also the first system to be used for the automatic classification of Internet resources, and the first to be made available as a classification tool on the Web. The objective of this research is to establish the advantages of using UDC for information retrieval in a networked environment, to highlight the problems of automation and classification exchange, and to offer possible solutions. The first research question was: is there enough evidence of the use of classification on the Internet to justify further development with this particular environment in mind? The second: what are the automation requirements for the full exploitation of UDC and its exchange? The third: which areas are in need of improvement, and what specific recommendations can be made for implementing the UDC in a networked environment? A summary of the changes required in the management and development of the UDC to facilitate its full adaptation for future use is drawn from this analysis.
  18. Tavakolizadeh-Ravari, M.: Analysis of the long term dynamics in thesaurus developments and its consequences (2017) 0.01
    0.011292135 = product of:
      0.04516854 = sum of:
        0.04516854 = weight(_text_:term in 3081) [ClassicSimilarity], result of:
          0.04516854 = score(doc=3081,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.20621133 = fieldWeight in 3081, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.03125 = fieldNorm(doc=3081)
      0.25 = coord(1/4)
    
  19. Menges, T.: Möglichkeiten und Grenzen der Übertragbarkeit eines Buches auf Hypertext am Beispiel einer französischen Grundgrammatik (Klein; Kleineidam) (1997) 0.01
    0.011130357 = product of:
      0.04452143 = sum of:
        0.04452143 = product of:
          0.08904286 = sum of:
            0.08904286 = weight(_text_:22 in 1496) [ClassicSimilarity], result of:
              0.08904286 = score(doc=1496,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.5416616 = fieldWeight in 1496, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1496)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 7.1998 18:23:25
  20. Schneider, A.: ¬Die Verzeichnung und sachliche Erschließung der Belletristik in Kaysers Bücherlexikon und im Schlagwortkatalog Georg/Ost (1980) 0.01
    0.011130357 = product of:
      0.04452143 = sum of:
        0.04452143 = product of:
          0.08904286 = sum of:
            0.08904286 = weight(_text_:22 in 5309) [ClassicSimilarity], result of:
              0.08904286 = score(doc=5309,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.5416616 = fieldWeight in 5309, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5309)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    5. 8.2006 13:07:22

Languages

  • d 41
  • e 28
  • f 1
  • hu 1
  • pt 1

Types