Search (18 results, page 1 of 1)

  • Active filter: type_ss:"el"
  • Active filter: type_ss:"x"
  1. Shala, E.: Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.22
    0.21710056 = product of:
      0.50656796 = sum of:
        0.019483384 = product of:
          0.097416915 = sum of:
            0.097416915 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.097416915 = score(doc=4388,freq=2.0), product of:
                0.20800096 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.02453417 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.2 = coord(1/5)
        0.097416915 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.097416915 = score(doc=4388,freq=2.0), product of:
            0.20800096 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.02453417 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
        0.097416915 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.097416915 = score(doc=4388,freq=2.0), product of:
            0.20800096 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.02453417 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
        0.097416915 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.097416915 = score(doc=4388,freq=2.0), product of:
            0.20800096 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.02453417 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
        0.097416915 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.097416915 = score(doc=4388,freq=2.0), product of:
            0.20800096 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.02453417 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
        0.097416915 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.097416915 = score(doc=4388,freq=2.0), product of:
            0.20800096 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.02453417 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
      0.42857143 = coord(6/14)
    
    Footnote
    See: https://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=2ahUKEwizweHljdbcAhVS16QKHXcFD9QQFjABegQICRAB&url=https%3A%2F%2Fwww.researchgate.net%2Fpublication%2F271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls&usg=AOvVaw06orrdJmFF2xbCCp_hL26q.
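    Note on the ranking: the score breakdown above is standard Lucene/Solr "explain" output for ClassicSimilarity (TF-IDF). Each weight(...) leaf multiplies queryWeight (idf x queryNorm) by fieldWeight (sqrt(tf) x idf x fieldNorm), the leaves are summed, and coord(m/n) scales the sum by the fraction of query clauses that matched. The odd terms _text_:2f and _text_:3a are apparently the URL-encoded %2F and %3A sequences from URLs such as the one in the footnote, which is why this entry tops the list. A minimal Python sketch (function and variable names are ours, not Lucene's) reproduces the score from the printed values:

      import math

      def leaf_weight(tf, idf, query_norm, field_norm):
          # One weight(_text_:term) leaf: queryWeight * fieldWeight.
          query_weight = idf * query_norm                  # 8.478011 * 0.02453417 = 0.20800096
          field_weight = math.sqrt(tf) * idf * field_norm  # 1.4142135 * 8.478011 * 0.0390625 = 0.46834838
          return query_weight * field_weight               # = 0.097416915

      idf, qn, fn = 8.478011, 0.02453417, 0.0390625        # values printed in the explanation above
      w = leaf_weight(2.0, idf, qn, fn)

      inner = w * (1 / 5) + 5 * w   # the _text_:3a leaf is scaled by coord(1/5); five _text_:2f leaves follow
      print(inner)                  # 0.50656796 = "sum of:"
      print(inner * 6 / 14)         # coord(6/14) -> 0.21710056, the document score

    The idf values themselves follow ClassicSimilarity's 1 + ln(maxDocs / (docFreq + 1)); for example, 1 + ln(44218 / 25) ≈ 8.478.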
  2. Kara, S.: An ontology-based retrieval system using semantic indexing (2012) 0.02
    0.015440235 = product of:
      0.07205443 = sum of:
        0.032266766 = weight(_text_:system in 3829) [ClassicSimilarity], result of:
          0.032266766 = score(doc=3829,freq=8.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.41757566 = fieldWeight in 3829, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=3829)
        0.0100241685 = weight(_text_:information in 3829) [ClassicSimilarity], result of:
          0.0100241685 = score(doc=3829,freq=8.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.23274569 = fieldWeight in 3829, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3829)
        0.029763501 = weight(_text_:retrieval in 3829) [ClassicSimilarity], result of:
          0.029763501 = score(doc=3829,freq=8.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.40105087 = fieldWeight in 3829, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=3829)
      0.21428572 = coord(3/14)
    
    Abstract
    In this thesis, we present an ontology-based information extraction and retrieval system and its application to the soccer domain. In general, we deal with three issues in semantic search, namely usability, scalability and retrieval performance. We propose a keyword-based semantic retrieval approach. The performance of the system is improved considerably by using domain-specific information extraction, inference and rules. Scalability is achieved by adapting a semantic indexing approach. The system is implemented using state-of-the-art Semantic Web technologies, and its performance is evaluated against traditional systems as well as query expansion methods. Furthermore, a detailed evaluation is provided to observe the performance gain due to domain-specific information extraction and inference. Finally, we show how we use semantic indexing to solve simple structural ambiguities.
    Source
    Information Systems. 37(2012) no. 4, pp. 294-305
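    A toy sketch of the keyword-based semantic retrieval described in the abstract above: query keywords are expanded through an ontology before matching, so documents phrased with narrower concepts are still found. The mini ontology, the documents and the scoring below are hypothetical and far simpler than the thesis system:

      # Ontology-driven query expansion over a toy soccer corpus (all data made up).
      ONTOLOGY = {  # concept -> narrower concepts
          "player": ["goalkeeper", "defender", "striker"],
          "match":  ["derby", "final"],
      }

      DOCS = {
          1: "the goalkeeper saved the final",
          2: "a striker scored in the derby match",
          3: "transfer rumours dominate the news",
      }

      def expand(terms):
          expanded = set(terms)
          for t in terms:
              expanded.update(ONTOLOGY.get(t, []))
          return expanded

      def search(query):
          terms = expand(query.lower().split())
          scored = {d: sum(w in terms for w in text.split()) for d, text in DOCS.items()}
          return sorted((d for d, s in scored.items() if s), key=lambda d: -scored[d])

      print(search("player match"))  # [2, 1] - document 1 is found only via expansion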
  3. Francu, V.: Multilingual access to information using an intermediate language (2003) 0.01
    0.010861087 = product of:
      0.05068507 = sum of:
        0.024050226 = weight(_text_:system in 1742) [ClassicSimilarity], result of:
          0.024050226 = score(doc=1742,freq=10.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.31124252 = fieldWeight in 1742, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=1742)
        0.009450877 = weight(_text_:information in 1742) [ClassicSimilarity], result of:
          0.009450877 = score(doc=1742,freq=16.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.21943474 = fieldWeight in 1742, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=1742)
        0.017183965 = weight(_text_:retrieval in 1742) [ClassicSimilarity], result of:
          0.017183965 = score(doc=1742,freq=6.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.23154683 = fieldWeight in 1742, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=1742)
      0.21428572 = coord(3/14)
    
    Abstract
    While being theoretically so widely available, information can be restricted from more general use by linguistic barriers. The linguistic aspects of information languages, and particularly the chances of enhanced access to information by means of multilingual access facilities, form the substance of this thesis. The main problem of this research is thus to demonstrate that information retrieval can be improved by searching with multilingual thesaurus terms based on an intermediate or switching language. Universal classification systems in general can play the role of switching languages, for reasons dealt with in the forthcoming pages. The Universal Decimal Classification (UDC) in particular is the classification system used here as an example of a switching language. The question may arise: why a universal classification system and not another thesaurus? Because the UDC, like most classification systems, uses symbols. It is therefore language independent, and the problems of compatibility between such a thesaurus and various other thesauri in different languages are avoided. Another question may still arise: why not, then, assign running numbers to the descriptors in a thesaurus and make a switching language out of the resulting enumerative system? Because of other characteristics of the UDC: hierarchical structure and terminological richness, consistency and control. One big question to answer is whether a thesaurus can be built on the basis of a classification system in any and all of its parts, and to what extent this question can be given an affirmative answer. This depends much on the attributes of the universal classification system, which can be favourably used to this purpose. Examples of different situations will be given and discussed, beginning with those classes of UDC which are best fitted for building a thesaurus structure out of them (classes which are both hierarchical and faceted)...
    Content
    Contents: INFORMATION LANGUAGES: A LINGUISTIC APPROACH - MULTILINGUAL ASPECTS IN INFORMATION STORAGE AND RETRIEVAL - COMPATIBILITY AND CONVERTIBILITY OF INFORMATION LANGUAGES - CURRENT TRENDS IN MULTILINGUAL ACCESS - BUILDING UDC-BASED MULTILINGUAL THESAURI - ONLINE APPLICATIONS OF THE UDC-BASED MULTILINGUAL THESAURI - THE IMPACT OF SPECIFICITY ON THE RETRIEVAL POWER OF A UDC-BASED MULTILINGUAL THESAURUS - FINAL REMARKS AND GENERAL CONCLUSIONS. Thesis submitted for the degree of Doctor in Language and Literature at the University of Antwerp. - Cf.: http://dlist.sir.arizona.edu/1862/.
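    The switching-language mechanism itself is simple enough to sketch: thesaurus terms in several languages map to one language-independent UDC notation, and retrieval goes through that notation. The mappings below are illustrative only (UDC 54 is chemistry, 53 physics); the thesis' thesauri are of course far richer:

      # Toy switching-language lookup: multilingual terms pivot on a UDC notation.
      TERM_TO_UDC = {
          ("en", "chemistry"): "54",
          ("de", "Chemie"):    "54",
          ("fr", "chimie"):    "54",
          ("en", "physics"):   "53",
          ("de", "Physik"):    "53",
      }

      DOCS_BY_UDC = {"54": ["doc-101", "doc-205"], "53": ["doc-333"]}

      def multilingual_search(lang, term):
          notation = TERM_TO_UDC.get((lang, term))
          return DOCS_BY_UDC.get(notation, [])

      # The same documents are reachable from any source language:
      print(multilingual_search("de", "Chemie"))  # ['doc-101', 'doc-205']
      print(multilingual_search("fr", "chimie"))  # ['doc-101', 'doc-205']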
  4. Artemenko, O.; Shramko, M.: Entwicklung eines Werkzeugs zur Sprachidentifikation in mono- und multilingualen Texten (2005) 0.01
    0.0069961473 = product of:
      0.032648686 = sum of:
        0.021043949 = weight(_text_:system in 572) [ClassicSimilarity], result of:
          0.021043949 = score(doc=572,freq=10.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.2723372 = fieldWeight in 572, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02734375 = fieldNorm(doc=572)
        0.0029237159 = weight(_text_:information in 572) [ClassicSimilarity], result of:
          0.0029237159 = score(doc=572,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.06788416 = fieldWeight in 572, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=572)
        0.008681021 = weight(_text_:retrieval in 572) [ClassicSimilarity], result of:
          0.008681021 = score(doc=572,freq=2.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.11697317 = fieldWeight in 572, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02734375 = fieldNorm(doc=572)
      0.21428572 = coord(3/14)
    
    Abstract
    Identifying the language or languages of electronic text documents is one of the most important steps in many machine text-processing workflows. This thesis presents LangIdent, a system for language identification in monolingual and multilingual electronic text documents. The system offers both a selection of established algorithms for identifying the language of monolingual text documents and a new algorithm for identifying the languages of multilingual text documents.
    With the spread of the Internet, the number of documents available on the World Wide Web keeps growing. Guaranteeing Internet users efficient access to the information they want is becoming a major challenge for the modern information society. A multitude of tools is already in use to help users orient themselves in the growing flood of information. However, the enormous amount of unstructured and distributed information is not the only difficulty to overcome when developing tools of this kind. The increasing multilingualism of web content creates a need for language identification software that identifies the language(s) of electronic documents for targeted further processing. Such language identifiers can, for example, be used effectively in multilingual information retrieval, since automatic indexing processes such as stemming or stop-word extraction build on the results of language identification. This thesis presents the new system "LangIdent" for language identification in electronic text documents, intended primarily for teaching and research at the University of Hildesheim. "LangIdent" contains a selection of established algorithms for monolingual language identification, which the user can select and configure interactively. In addition, a new algorithm was implemented in the system that makes it possible to identify the languages in which a multilingual document is written. The identification is not limited to an enumeration of the languages found; rather, the text is split into monolingual sections, each annotated with the identified language.
    The thesis is organized in two main parts. The first part consists of chapters 1-5, which lay out the theoretical foundations of language identification. The first chapter describes the language identification process and defines basic terms. The second and third chapters present the prevailing approaches to language identification for monolingual documents and compare them by discussing their advantages and disadvantages. The fourth chapter presents some works that have addressed language identification in multilingual texts. The first part of the thesis closes with an overview of the language identification tools already developed and available on the Internet. The second part presents the development of the LangIdent system. Chapters 6 and 7 summarize the requirements placed on the system and define the most important phases of the project. Chapters 8 and 9 describe the system architecture and give a detailed description of its core components. Chapter 10 provides a static UML class diagram with a thorough explanation of the attributes and methods of the classes presented in the diagram. The next chapter deals with the problems encountered during system development. The operation of the program is described in chapter 12. The final chapter presents the system evaluation, covering the structure and size of the training corpora as well as the most important results, followed by a discussion.
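    The abstract does not name the algorithms LangIdent offers, but a common baseline for monolingual language identification is character n-gram profiling with the Cavnar-Trenkle out-of-place measure. A compact sketch under that assumption, with toy training data:

      from collections import Counter

      def ngram_profile(text, n=3, top=300):
          # Rank the most frequent character n-grams of a text.
          text = " " + text.lower() + " "
          grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
          return [g for g, _ in grams.most_common(top)]

      def out_of_place(profile, reference):
          # Cavnar-Trenkle: sum of rank displacements; absent n-grams get a maximal penalty.
          ref_rank = {g: i for i, g in enumerate(reference)}
          worst = len(reference)
          return sum(abs(i - ref_rank.get(g, worst)) for i, g in enumerate(profile))

      TRAINING = {  # in practice: large corpora per language
          "de": "die sprache der dokumente wird anhand von zeichenfolgen erkannt",
          "en": "the language of the documents is identified from character sequences",
      }

      def identify(text):
          profile = ngram_profile(text)
          return min(TRAINING, key=lambda lang: out_of_place(profile, ngram_profile(TRAINING[lang])))

      print(identify("the language of this text"))  # 'en'

    Multilingual identification, LangIdent's novelty, goes further: per the abstract, the text is split into monolingual sections, each annotated with its identified language (one conceivable realization is window-by-window classification with merging, though the thesis' actual method is not described here).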
  5. Styltsvig, H.B.: Ontology-based information retrieval (2006) 0.01
    0.0052716834 = product of:
      0.036901783 = sum of:
        0.0088404855 = weight(_text_:information in 1154) [ClassicSimilarity], result of:
          0.0088404855 = score(doc=1154,freq=14.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.20526241 = fieldWeight in 1154, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=1154)
        0.028061297 = weight(_text_:retrieval in 1154) [ClassicSimilarity], result of:
          0.028061297 = score(doc=1154,freq=16.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.37811437 = fieldWeight in 1154, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=1154)
      0.14285715 = coord(2/14)
    
    Abstract
    In this thesis, we will present methods for introducing ontologies in information retrieval. The main hypothesis is that the inclusion of conceptual knowledge such as ontologies in the information retrieval process can contribute to the solution of major problems currently found in information retrieval. This utilization of ontologies has a number of challenges. Our focus is on the use of similarity measures derived from the knowledge about relations between concepts in ontologies, the recognition of semantic information in texts and the mapping of this knowledge into the ontologies in use, as well as how to fuse together the ideas of ontological similarity and ontological indexing into a realistic information retrieval scenario. To achieve the recognition of semantic knowledge in a text, shallow natural language processing is used during indexing, revealing knowledge down to the level of noun phrases. Furthermore, we briefly cover the identification of semantic relations inside and between noun phrases, and discuss what kinds of problems are caused by an increase in compoundness with respect to the structure of concepts in the evaluation of queries. Measuring similarity between concepts based on distances in the structure of the ontology is discussed. In addition, a shared nodes measure is introduced and, based on a set of intuitive similarity properties, compared to a number of different measures. In this comparison the shared nodes measure appears to be superior, though more computationally complex. Some major problems with shared nodes are discussed; they relate to the way relations differ in the degree to which they bring the concepts they connect closer together. A generalized measure called weighted shared nodes is introduced to deal with these problems. Finally, the utilization of concept similarity in query evaluation is discussed. A semantic expansion approach that incorporates concept similarity is introduced, and a generalized fuzzy set retrieval model that applies expansion during query evaluation is presented. While not commonly used in present information retrieval systems, the fuzzy set model appears to offer the flexibility needed when generalizing to an ontology-based retrieval model and, with the introduction of a hierarchical fuzzy aggregation principle, compound concepts can be handled in a straightforward and natural manner.
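    A rough illustration of the shared nodes idea: represent a concept by itself plus all of its ancestors in the ontology, and take similarity as the overlap of those node sets. This unweighted toy version ignores the thesis' weighted generalization and its treatment of different relation types:

      # Toy "shared nodes" similarity over a tiny is-a hierarchy (all data made up).
      IS_A = {  # child -> parent
          "striker":    "player",
          "goalkeeper": "player",
          "player":     "person",
          "referee":    "person",
          "person":     "entity",
          "stadium":    "entity",
      }

      def upward_closure(concept):
          nodes = {concept}
          while concept in IS_A:
              concept = IS_A[concept]
              nodes.add(concept)
          return nodes

      def shared_nodes_sim(a, b):
          na, nb = upward_closure(a), upward_closure(b)
          return len(na & nb) / len(na | nb)  # Jaccard over the shared nodes

      print(shared_nodes_sim("striker", "goalkeeper"))  # 0.6 (share player, person, entity)
      print(shared_nodes_sim("striker", "stadium"))     # 0.2 (share only entity)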
  6. Thornton, K.: Powerful structure : inspecting infrastructures of information organization in Wikimedia Foundation projects (2016) 0.00
    0.004860487 = product of:
      0.034023408 = sum of:
        0.022816047 = weight(_text_:system in 3288) [ClassicSimilarity], result of:
          0.022816047 = score(doc=3288,freq=4.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.29527056 = fieldWeight in 3288, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=3288)
        0.011207362 = weight(_text_:information in 3288) [ClassicSimilarity], result of:
          0.011207362 = score(doc=3288,freq=10.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.2602176 = fieldWeight in 3288, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3288)
      0.14285715 = coord(2/14)
    
    Abstract
    This dissertation investigates the social and technological factors of collaboratively organizing information in commons-based peer production systems. To do so, it analyzes the diverse strategies that members of Wikimedia Foundation (WMF) project communities use to organize information. Key findings from this dissertation show that conceptual structures of information organization are encoded into the infrastructure of WMF projects. The fact that WMF projects are commons-based peer production systems means that we can inspect the code that enables these systems, but a specific type of technical literacy is required to do so. I use three methods in this dissertation. I conduct a qualitative content analysis of the discussions surrounding the design, implementation and evaluation of the category system; a quantitative analysis using descriptive statistics of patterns of editing among editors who contributed to the code of templates for information boxes; and a close reading of the infrastructure used to create the category system, the infobox templates, and the knowledge base of structured data.
  7. Thomi, M.: Überblick und Bewertung von Musiksuchmaschinen (2011) 0.00
    0.0040191617 = product of:
      0.02813413 = sum of:
        0.0070881573 = weight(_text_:information in 3046) [ClassicSimilarity], result of:
          0.0070881573 = score(doc=3046,freq=4.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.16457605 = fieldWeight in 3046, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3046)
        0.021045974 = weight(_text_:retrieval in 3046) [ClassicSimilarity], result of:
          0.021045974 = score(doc=3046,freq=4.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.2835858 = fieldWeight in 3046, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=3046)
      0.14285715 = coord(2/14)
    
    Abstract
    The growing amount of music in the form of audio files on the Internet, and its popularity with Internet users around the world, calls for practicable retrieval solutions. The field of music information retrieval (MIR) includes, among other things, the development of music information retrieval systems with different, partly multimedia-based approaches. This thesis explains how MIR systems (i.e. music search engines) work, both text-based systems and those operating with pattern recognition. Furthermore, freely accessible music search engines on the WWW covering pop/rock are examined, in the sense of an evaluated state of the art. Based on this state of the art and on secondary assessments, recommendations are formulated as requirements for music search engines and possible future scenarios are outlined.
  8. Hüsken, P.: Information Retrieval im Semantic Web (2006) 0.00
    0.0040191617 = product of:
      0.02813413 = sum of:
        0.0070881573 = weight(_text_:information in 4333) [ClassicSimilarity], result of:
          0.0070881573 = score(doc=4333,freq=4.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.16457605 = fieldWeight in 4333, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4333)
        0.021045974 = weight(_text_:retrieval in 4333) [ClassicSimilarity], result of:
          0.021045974 = score(doc=4333,freq=4.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.2835858 = fieldWeight in 4333, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=4333)
      0.14285715 = coord(2/14)
    
    Abstract
    The Semantic Web denotes an extended World Wide Web (WWW) that models the meaning of presented content in new standardized languages such as RDF Schema and OWL. This thesis addresses the information retrieval aspect, i.e. it investigates to what extent methods of information search can be transferred to modelled knowledge. The characteristic features of IR systems, such as vague queries and support for uncertain knowledge, are treated in the context of the Semantic Web. The focus is on searching for facts within a knowledge domain that are either explicitly modelled or can be derived implicitly by applying inference. Building on the PIRE retrieval engine developed at the University of Duisburg-Essen, the application of uncertain inference with probabilistic predicate logic (pDatalog) is implemented.
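    To give a flavour of uncertain inference of the pDatalog kind: a rule derives new facts whose probability, under an independence assumption, is the product of the probabilities of the facts it joins. The sketch below is ours and much simpler than pDatalog or PIRE:

      # Toy probabilistic rule evaluation (independence assumed; data made up).
      FACTS = {("worksFor", "alice", "uni-due"): 0.9,
               ("locatedIn", "uni-due", "essen"): 0.8}

      # Rule: worksIn(X, City) <- worksFor(X, Org), locatedIn(Org, City)
      def derive_works_in(facts):
          derived = {}
          for (p1, x, org), pr1 in facts.items():
              if p1 != "worksFor":
                  continue
              for (p2, org2, city), pr2 in facts.items():
                  if p2 == "locatedIn" and org2 == org:
                      derived[("worksIn", x, city)] = pr1 * pr2
          return derived

      print(derive_works_in(FACTS))  # {('worksIn', 'alice', 'essen'): 0.72}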
  9. Vocht, L. De: Exploring semantic relationships in the Web of Data : Semantische relaties verkennen in data op het web (2017) 0.00
    0.0039532017 = product of:
      0.018448275 = sum of:
        0.0067222426 = weight(_text_:system in 4232) [ClassicSimilarity], result of:
          0.0067222426 = score(doc=4232,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.08699492 = fieldWeight in 4232, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4232)
        0.0055253035 = weight(_text_:information in 4232) [ClassicSimilarity], result of:
          0.0055253035 = score(doc=4232,freq=14.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.128289 = fieldWeight in 4232, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4232)
        0.0062007294 = weight(_text_:retrieval in 4232) [ClassicSimilarity], result of:
          0.0062007294 = score(doc=4232,freq=2.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.08355226 = fieldWeight in 4232, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4232)
      0.21428572 = coord(3/14)
    
    Abstract
    After the launch of the World Wide Web, it became clear that searching documents on the Web would not be trivial. Well-known engines to search the web, like Google, focus on searching web documents using keywords. The documents are structured and indexed to ensure keywords match documents as accurately as possible. However, searching by keywords does not always suffice. It is often the case that users do not know exactly how to formulate the search query or which keywords guarantee retrieving the most relevant documents. Besides that, users sometimes want to browse information rather than look up something specific. It turned out that there is a need for systems that enable more interactivity and facilitate the gradual refinement of search queries to explore the Web. Users expect more from the Web because the short keyword-based queries they pose during search do not suffice for all cases. On top of that, the Web is changing structurally. The Web comprises, apart from a collection of documents, more and more linked data: pieces of information structured so they can be processed by machines. The semantics applied in this way allow users to indicate their search intentions to machines exactly. This is made possible by describing data following controlled vocabularies: concept lists composed by experts, published on the Web with unique identifiers. Even so, it is still not trivial to explore data on the Web. There is a large variety of vocabularies, and various data sources use different terms to identify the same concepts.
    This PhD thesis describes how to effectively explore linked data on the Web. The main focus is on scenarios where users want to discover relationships between resources rather than find out more about something specific. Searching for a specific document or piece of information fits in the theoretical framework of information retrieval and is associated with exploratory search. Exploratory search goes beyond 'looking up something' when users are seeking more detailed understanding, further investigation or navigation of the initial search results. The ideas behind exploratory search and querying linked data merge when it comes to the way knowledge is represented and indexed by machines - how data is structured and stored for optimal searchability. Queries and information should be aligned so that searches also reveal connections between results. This implies that they take into account the same semantic entities, relevant at that moment. To realize this, we research three techniques that are evaluated one by one in an experimental set-up to assess how well they succeed in their goals. In the end, the techniques are applied to a practical use case that focuses on forming a bridge between the Web and the use of digital libraries in scientific research. Our first technique focuses on the interactive visualization of search results. Linked data resources can be brought into relation with each other at will. This leads to complex and diverse graph structures. Our technique facilitates navigation and supports a workflow starting from a broad overview of the data, allowing the user to narrow down to the desired level of detail and then broaden again. To validate the flow, two visualizations were implemented and presented to test users. The users judged the usability of the visualizations, how the visualizations fit in the workflow and to which degree their features seemed useful for the exploration of linked data.
    When we speak about finding relationships between resources, it is necessary to dive deeper into the structure. The graph structure of linked data, where the semantics give meaning to the relationships between resources, enables the execution of pathfinding algorithms. The assigned weights and heuristics are base components of such algorithms and ultimately determine which resources are included in a path, and in which order. These paths explain indirect connections between resources. Our third technique proposes an algorithm that optimizes the choice of resources in terms of serendipity. Some optimizations guard the consistency of candidate paths, where the coherence of consecutive connections is maximized to avoid trivial and overly arbitrary paths. The implementation uses the A* algorithm, the de facto reference when it comes to heuristically optimized minimal-cost paths. The effectiveness of paths was measured based on common automatic metrics and on surveys where users could indicate their preference between paths, each generated in a different way. Finally, all our techniques are applied to a use case about publications in digital libraries, where they are aligned with information about scientific conferences and researchers. The application to this use case is a practical example because the different aspects of exploratory search come together. In fact, the techniques also evolved from the experience of implementing the use case. Practical details about the semantic model are explained and the implementation of the search system is clarified module by module. The evaluation positions the result, a prototype tool to explore scientific publications, researchers and conferences, next to some important alternatives.
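    The pathfinding step lends itself to a compact sketch: A* over a small hypothetical linked-data graph, here with uniform edge costs and a zero heuristic (in which case A* degenerates to Dijkstra). The thesis' weights and serendipity heuristics are not reproduced:

      import heapq

      # Toy linked-data graph: resource -> [(neighbour, edge cost)] (all data made up).
      GRAPH = {
          "paper:1":    [("author:ada", 1.0), ("conf:www", 1.0)],
          "author:ada": [("paper:2", 1.0)],
          "conf:www":   [("paper:3", 1.0)],
          "paper:2":    [("conf:www", 1.0)],
          "paper:3":    [],
      }

      def a_star(start, goal, heuristic=lambda n: 0.0):
          frontier = [(heuristic(start), 0.0, start, [start])]
          seen = set()
          while frontier:
              _, cost, node, path = heapq.heappop(frontier)
              if node == goal:
                  return path, cost
              if node in seen:
                  continue
              seen.add(node)
              for nxt, w in GRAPH.get(node, []):
                  heapq.heappush(frontier, (cost + w + heuristic(nxt), cost + w, nxt, path + [nxt]))
          return None, float("inf")

      print(a_star("paper:1", "paper:3"))  # (['paper:1', 'conf:www', 'paper:3'], 2.0)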
  10. Li, Z.: A domain specific search engine with explicit document relations (2013) 0.00
    0.0035600248 = product of:
      0.024920173 = sum of:
        0.019013375 = weight(_text_:system in 1210) [ClassicSimilarity], result of:
          0.019013375 = score(doc=1210,freq=4.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.24605882 = fieldWeight in 1210, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1210)
        0.005906798 = weight(_text_:information in 1210) [ClassicSimilarity], result of:
          0.005906798 = score(doc=1210,freq=4.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.13714671 = fieldWeight in 1210, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1210)
      0.14285715 = coord(2/14)
    
    Abstract
    The current web consists of documents that are highly heterogeneous and hard for machines to understand. The Semantic Web is a progressive movement of the World Wide Web, aiming at converting the current web of unstructured documents into a web of data. In the Semantic Web, web documents are annotated with metadata using a standardized ontology language. These annotated documents are directly processable by machines, which greatly improves their usability and usefulness. At Ericsson, similar problems occur. Massive numbers of documents are being created with well-defined structures. Though these documents concern domain-specific knowledge and can have rich relations, they are currently managed by a traditional search engine, which ignores the rich domain-specific information and presents little of it to users. Motivated by the Semantic Web, we aim to find standard ways to process these documents, extract rich domain-specific information and annotate these data to documents with formal markup languages. We propose this project to develop a domain-specific search engine for processing different documents and building explicit relations for them. This research project has three main focuses: examining different domain-specific documents and finding ways to extract their metadata; integrating a text search engine with an ontology server; and exploring novel ways to build relations between documents. We implement this system and demonstrate its functions. As a prototype, the system provides the required features and will be extended in the future.
  11. Munzner, T.: Interactive visualization of large graphs and networks (2000) 0.00
    0.0033363807 = product of:
      0.023354664 = sum of:
        0.018629227 = weight(_text_:system in 4746) [ClassicSimilarity], result of:
          0.018629227 = score(doc=4746,freq=6.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.24108742 = fieldWeight in 4746, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=4746)
        0.0047254385 = weight(_text_:information in 4746) [ClassicSimilarity], result of:
          0.0047254385 = score(doc=4746,freq=4.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.10971737 = fieldWeight in 4746, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=4746)
      0.14285715 = coord(2/14)
    
    Abstract
    Many real-world domains can be represented as large node-link graphs: backbone Internet routers connect with 70,000 other hosts, mid-sized Web servers handle between 20,000 and 200,000 hyperlinked documents, and dictionaries contain millions of words defined in terms of each other. Computational manipulation of such large graphs is common, but previous tools for graph visualization have been limited to datasets of a few thousand nodes. Visual depictions of graphs and networks are external representations that exploit human visual processing to reduce the cognitive load of many tasks that require understanding of global or local structure. We assert that the two key advantages of computer-based systems for information visualization over traditional paper-based visual exposition are interactivity and scalability. We also argue that designing visualization software by taking the characteristics of a target user's task domain into account leads to systems that are more effective and scale to larger datasets than previous work. This thesis contains a detailed analysis of three specialized systems for the interactive exploration of large graphs, relating the intended tasks to the spatial layout and visual encoding choices. We present two novel algorithms for specialized layout and drawing that use quite different visual metaphors. The H3 system for visualizing the hyperlink structures of web sites scales to datasets of over 100,000 nodes by using a carefully chosen spanning tree as the layout backbone, 3D hyperbolic geometry for a Focus+Context view, and provides a fluid interactive experience through guaranteed frame rate drawing. The Constellation system features a highly specialized 2D layout intended to spatially encode domain-specific information for computational linguists checking the plausibility of a large semantic network created from dictionaries. The Planet Multicast system for displaying the tunnel topology of the Internet's multicast backbone provides a literal 3D geographic layout of arcs on a globe to help MBone maintainers find misconfigured long-distance tunnels. Each of these three systems provides a very different view of the graph structure, and we evaluate their efficacy for the intended task. We generalize these findings in our analysis of the importance of interactivity and specialization for graph visualization systems that are effective and scalable.
  12. García Barrios, V.M.: Informationsaufbereitung und Wissensorganisation in transnationalen Konzernen : Konzeption eines Informationssystems für große und geographisch verteilte Unternehmen mit dem Hyperwave Information System (2002) 0.00
    0.0029541154 = product of:
      0.020678807 = sum of:
        0.013444485 = weight(_text_:system in 6000) [ClassicSimilarity], result of:
          0.013444485 = score(doc=6000,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.17398985 = fieldWeight in 6000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6000)
        0.0072343214 = weight(_text_:information in 6000) [ClassicSimilarity], result of:
          0.0072343214 = score(doc=6000,freq=6.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.16796975 = fieldWeight in 6000, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6000)
      0.14285715 = coord(2/14)
    
    Abstract
    Transnational corporations have an urgent need for a comprehensive solution for their intranet systems. The specific requirements for a knowledge-based information system are manifold; the most critical ones, however, are generally valid and arise from the strongly networked and geographically distributed structure of the corporation. In various knowledge disciplines, in particular knowledge management, information management, data management and knowledge organization, attempts are made to implement specific requirements, often in isolation within the individual disciplines and not rarely in an ineffective way. The following work therefore pursues a holistic approach across the knowledge disciplines in order to meet the extensive requirements. In the investigation part of this thesis, the problem is examined from the perspective of the most important knowledge disciplines involved, in order to identify existing or established solution approaches. The specific areas of influence of the disciplines on intranet solutions are reviewed and contrasted with critical aspects of the requirements (for example 'strong geographical distribution vs. system transparency', 'replication measures vs. system performance' or 'semantic knowledge models vs. needs-based knowledge access'). Each discipline offers efficient and effective solutions for different aspects, but no comprehensive design model uniting the specific solution approaches of the disciplines could be identified during the research process. Because of this situation, a two-part technical design model is presented in the design part of this thesis. It consists of a strategic analysis schema and a functional component schema, and takes into account the areas of influence of the knowledge disciplines mentioned above. Based on the concrete requirement of an intranet solution for a transnational corporation (presented here in anonymized form), the model is applied, and the technical realization of a knowledge-based information system on the basis of the Hyperwave Information Server is shown, with two modules described in more detail by way of example.
    Theme
    Information Resources Management
  13. Nagy T., I.: Detecting multiword expressions and named entities in natural language texts (2014) 0.00
    0.0016578196 = product of:
      0.011604737 = sum of:
        0.0029237159 = weight(_text_:information in 1536) [ClassicSimilarity], result of:
          0.0029237159 = score(doc=1536,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.06788416 = fieldWeight in 1536, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1536)
        0.008681021 = weight(_text_:retrieval in 1536) [ClassicSimilarity], result of:
          0.008681021 = score(doc=1536,freq=2.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.11697317 = fieldWeight in 1536, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1536)
      0.14285715 = coord(2/14)
    
    Abstract
    Multiword expressions (MWEs) are lexical items that can be decomposed into single words and display lexical, syntactic, semantic, pragmatic and/or statistical idiosyncrasy (Sag et al., 2002; Kim, 2008; Calzolari et al., 2002). The proper treatment of multiword expressions such as rock 'n' roll and make a decision is essential for many natural language processing (NLP) applications like information extraction and retrieval, terminology extraction and machine translation, and it is important to identify multiword expressions in context. For example, in machine translation we must know that MWEs form one semantic unit, hence their parts should not be translated separately. For this, multiword expressions should be identified first in the text to be translated. The chief aim of this thesis is to develop machine learning-based approaches for the automatic detection of different types of multiword expressions in English and Hungarian natural language texts. In our investigations, we pay attention to the characteristics of different types of multiword expressions such as nominal compounds, multiword named entities and light verb constructions, and we apply novel methods to identify MWEs in raw texts. In the thesis it will be demonstrated that nominal compounds and multiword named entities may require a similar approach for their automatic detection as they behave in the same way from a linguistic point of view. Furthermore, it will be shown that the automatic detection of light verb constructions can be carried out using two effective machine learning-based approaches.
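    A classic unsupervised baseline for spotting MWE candidates - much simpler than the supervised models developed in the thesis - is to rank recurring bigrams by pointwise mutual information (PMI). A toy sketch on made-up text:

      import math
      from collections import Counter

      TEXT = ("rock n roll bands make a decision to play rock n roll "
              "while fans make a decision to dance").split()

      N = len(TEXT)
      unigrams = Counter(TEXT)
      bigrams = Counter(zip(TEXT, TEXT[1:]))

      def pmi(w1, w2):
          # Pointwise mutual information; higher = more collocation-like.
          p_xy = bigrams[(w1, w2)] / (N - 1)
          return math.log2(p_xy / ((unigrams[w1] / N) * (unigrams[w2] / N)))

      # Require frequency >= 2 to damp PMI's bias toward rare pairs.
      candidates = sorted((b for b, c in bigrams.items() if c >= 2), key=lambda b: -pmi(*b))
      print(candidates)  # ('rock', 'n'), ('n', 'roll'), ('make', 'a'), ... lead the list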
  14. Kirk, J.: Theorising information use : managers and their work (2002) 0.00
    0.0015059438 = product of:
      0.021083213 = sum of:
        0.021083213 = weight(_text_:information in 560) [ClassicSimilarity], result of:
          0.021083213 = score(doc=560,freq=26.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.4895196 = fieldWeight in 560, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=560)
      0.071428575 = coord(1/14)
    
    Abstract
    The focus of this thesis is information use. Although a key concept in information behaviour, information use has received little attention from information science researchers. Studies of other key concepts such as information need and information seeking are dominant in information behaviour research. Information use is an area of interest to information professionals who rely on research outcomes to shape their practice. There are few empirical studies of how people actually use information that might guide and refine the development of information systems, products and services.
    Theme
    Information
  15. Klas, C.-P.: DAFFODIL: Strategische Unterstützung bei der Informationssuche in Digitalen Bibliotheken (2007) 0.00
    0.0013580982 = product of:
      0.019013375 = sum of:
        0.019013375 = weight(_text_:system in 1843) [ClassicSimilarity], result of:
          0.019013375 = score(doc=1843,freq=4.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.24605882 = fieldWeight in 1843, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1843)
      0.071428575 = coord(1/14)
    
    Abstract
    Computer-supported information seeking, whether in a physical library or in a digital library, is still a time-consuming and therefore expensive undertaking. Three problem areas can be identified as the main reasons. First, there are numerous access points, each with different forms, query languages and varying content quality. Second, there is a lack of urgently needed provider-spanning integration of information and services. Third, unsatisfactory functionality fails to adequately support users in their information seeking process. All these points ultimately lead to lengthy and therefore expensive search processes. This dissertation takes on the task of addressing the problem areas mentioned above in a suitable way and developing an adequate solution. Through strategic support, in the form of various integrated services provided by an active system, the user receives assistance in satisfying his information need effectively and efficiently. The results of this work, substantiated by a thorough evaluation, offer both theoretical and practical solutions for the development and use of digital libraries: - The theoretical part presents a model for distributed library services, structures them, and places them in an overall context. This makes it easier to model new services, and their potential benefit can be discussed in advance. - The practical part builds on the developed model and enables - users to carry out a comprehensive literature search effectively and efficiently and to manage it in a sustainable way, and - developers of digital libraries to develop further services on top of a large number of accessible basic services. Overall, the Daffodil system can be used as a base architecture for the development and evaluation of digital libraries, and it thus contributes to scientific research in this area.
  16. Bierbach, P.: Wissensrepräsentation - Gegenstände und Begriffe : Bedingungen des Antinomieproblems bei Frege und Chancen des Begriffssystems bei Lambert (2001) 0.00
    7.682563E-4 = product of:
      0.010755588 = sum of:
        0.010755588 = weight(_text_:system in 4498) [ClassicSimilarity], result of:
          0.010755588 = score(doc=4498,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.13919188 = fieldWeight in 4498, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=4498)
      0.071428575 = coord(1/14)
    
    Abstract
    The possibility of a universal encyclopedia, realizable on the basis of networked computers, leads, because of the technically necessary reduction to only one kind of representative, to systems in which either only objects are represented, which also stand for concepts, or only concepts, which also stand for objects. The dissertation examines the logical systems of Gottlob Frege and Johann Heinrich Lambert as examples of such representation systems. Frege's system, based on the assumption of the objectivity of meanings, failed because an antinomy could be demonstrated in it, which is why philosophers in the 20th century questioned the existence of an objective meaning of expressions and the translatability of thoughts from natural languages into a formal language. The dissertation shows that this conclusion was premature and that, even assuming the objectivity of knowledge, the antinomy is only triggered by two additional requirements in Frege's logic: the one-to-one assignment of an object to every concept, and the sharp delimitation of concepts, which forces the closure of the system. As an alternative, Lambert's concept system is discussed, in which every object is represented by a concept and, equivalently, by collections of concepts, and concepts can be replaced by collections of concepts. Neither of the two conditions triggering the antinomy is present here; at the same time, the progressive development of knowledge can be represented. The set-theoretic reconstruction of Lambert's concept system in the dissertation demonstrates its practical usability. The result of the dissertation is the proof that there are representation systems which need not abandon the assumption that knowledge can be objectivized - an assumption necessary for checking the reliability of entries in the encyclopedia - because they do not rest on the premises that trigger the antinomy.
  17. Frei, R.: Informationswissenschaftliche Begriffe und Kernprozesse aus Sicht des Radikalen Konstruktivismus (2009) 0.00
    5.06297E-4 = product of:
      0.0070881573 = sum of:
        0.0070881573 = weight(_text_:information in 3268) [ClassicSimilarity], result of:
          0.0070881573 = score(doc=3268,freq=4.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.16457605 = fieldWeight in 3268, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3268)
      0.071428575 = coord(1/14)
    
    Abstract
    Information science rests on a positivist-ontological perspective, which presents reality as describable and capturable. This thesis examines the basic concepts and exemplary core processes of information science from the viewpoint of radical constructivism, an epistemology which holds that humans do not experience their reality passively but construct it actively. After a brief description of information science, the thesis moves on to radical constructivism and explains the consequences that follow for communication and reality. The conventional view of data, information, knowledge, etc. is then contrasted with this new perspective. Building on this, information behaviour, information pathologies and information processes are presented from the radical-constructivist standpoint. In this way, information science is to be given a broader understanding of its subject area and additional competencies.
    Theme
    Information
  18. Sünkler, S.: Prototypische Entwicklung einer Software für die Erfassung und Analyse explorativer Suchen in Verbindung mit Tests zur Retrievaleffektivität (2012) 0.00
    2.9833836E-4 = product of:
      0.004176737 = sum of:
        0.004176737 = weight(_text_:information in 479) [ClassicSimilarity], result of:
          0.004176737 = score(doc=479,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.09697737 = fieldWeight in 479, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=479)
      0.071428575 = coord(1/14)
    
    Imprint
    Hamburg : HAW, Department Information