Search (10 results, page 1 of 1)

  • Filter: type_ss:"x"
  • Filter: type_ss:"el"
1. Shala, E.: Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.03
    0.034534223 = product of:
      0.06906845 = sum of:
        0.06906845 = product of:
          0.20720533 = sum of:
            0.20720533 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.20720533 = score(doc=4388,freq=2.0), product of:
                0.4424171 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.052184064 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Footnote
See: https://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=2ahUKEwizweHljdbcAhVS16QKHXcFD9QQFjABegQICRAB&url=https%3A%2F%2Fwww.researchgate.net%2Fpublication%2F271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls&usg=AOvVaw06orrdJmFF2xbCCp_hL26q.
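    The indented numeric breakdown under each hit is Lucene "explain" output for ClassicSimilarity (TF-IDF) scoring. As a minimal sketch, assuming the standard ClassicSimilarity formulas (tf = sqrt(freq); the function name is ours, not Lucene's), the arithmetic of hit 1 can be reproduced in Python from the factors shown in its tree:

      import math

      def classic_similarity(freq, idf, query_norm, field_norm):
          """Recompute one weight(_text_:term) node of an explain tree."""
          tf = math.sqrt(freq)                  # 1.4142135 for freq=2.0
          query_weight = idf * query_norm       # 0.4424171
          field_weight = tf * idf * field_norm  # 0.46834838
          return query_weight * field_weight    # 0.20720533

      raw = classic_similarity(freq=2.0, idf=8.478011,
                               query_norm=0.052184064, field_norm=0.0390625)
      score = raw * (1 / 3) * (1 / 2)           # the coord(1/3) and coord(1/2) factors
      print(score)                              # ~0.0345342, hit 1's total score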
2. Thornton, K.: Powerful structure : inspecting infrastructures of information organization in Wikimedia Foundation projects (2016) 0.01
    0.014147157 = product of:
      0.028294314 = sum of:
        0.028294314 = product of:
          0.056588627 = sum of:
            0.056588627 = weight(_text_:systems in 3288) [ClassicSimilarity], result of:
              0.056588627 = score(doc=3288,freq=6.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.35286134 = fieldWeight in 3288, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3288)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This dissertation investigates the social and technological factors of collaboratively organizing information in commons-based peer production systems. To do so, it analyzes the diverse strategies that members of Wikimedia Foundation (WMF) project communities use to organize information. Key findings from this dissertation show that conceptual structures of information organization are encoded into the infrastructure of WMF projects. The fact that WMF projects are commons-based peer production systems means that we can inspect the code that enables these systems, but a specific type of technical literacy is required to do so. I use three methods in this dissertation. I conduct a qualitative content analysis of the discussions surrounding the design, implementation and evaluation of the category system; a quantitative analysis using descriptive statistics of patterns of editing among editors who contributed to the code of templates for information boxes; and a close reading of the infrastructure used to create the category system, the infobox templates, and the knowledge base of structured data.
  3. Munzner, T.: Interactive visualization of large graphs and networks (2000) 0.01
    0.012175934 = product of:
      0.024351869 = sum of:
        0.024351869 = product of:
          0.048703738 = sum of:
            0.048703738 = weight(_text_:systems in 4746) [ClassicSimilarity], result of:
              0.048703738 = score(doc=4746,freq=10.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.3036947 = fieldWeight in 4746, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4746)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
Many real-world domains can be represented as large node-link graphs: backbone Internet routers connect with 70,000 other hosts, mid-sized Web servers handle between 20,000 and 200,000 hyperlinked documents, and dictionaries contain millions of words defined in terms of each other. Computational manipulation of such large graphs is common, but previous tools for graph visualization have been limited to datasets of a few thousand nodes. Visual depictions of graphs and networks are external representations that exploit human visual processing to reduce the cognitive load of many tasks that require understanding of global or local structure. We assert that the two key advantages of computer-based systems for information visualization over traditional paper-based visual exposition are interactivity and scalability. We also argue that designing visualization software by taking the characteristics of a target user's task domain into account leads to systems that are more effective and scale to larger datasets than previous work. This thesis contains a detailed analysis of three specialized systems for the interactive exploration of large graphs, relating the intended tasks to the spatial layout and visual encoding choices. We present two novel algorithms for specialized layout and drawing that use quite different visual metaphors. The H3 system for visualizing the hyperlink structures of web sites scales to datasets of over 100,000 nodes by using a carefully chosen spanning tree as the layout backbone and 3D hyperbolic geometry for a Focus+Context view, and it provides a fluid interactive experience through guaranteed-frame-rate drawing. The Constellation system features a highly specialized 2D layout intended to spatially encode domain-specific information for computational linguists checking the plausibility of a large semantic network created from dictionaries. The Planet Multicast system for displaying the tunnel topology of the Internet's multicast backbone provides a literal 3D geographic layout of arcs on a globe to help MBone maintainers find misconfigured long-distance tunnels. Each of these three systems provides a very different view of the graph structure, and we evaluate their efficacy for the intended task. We generalize these findings in our analysis of the importance of interactivity and specialization for graph visualization systems that are effective and scalable.
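    The H3 description above hinges on using a spanning tree as the layout backbone. As a hedged sketch only (a plain BFS tree; Munzner's H3 chooses the tree more carefully and lays it out in 3D hyperbolic space), extracting such a backbone from a link graph might look like this in Python:

      from collections import deque

      def bfs_spanning_tree(adjacency, root):
          """Pick a spanning tree of a link graph to serve as a layout backbone.
          Non-tree edges can later be drawn, or hidden, on top of the tree layout."""
          parent = {root: None}
          queue = deque([root])
          while queue:
              node = queue.popleft()
              for neighbor in adjacency.get(node, ()):
                  if neighbor not in parent:   # first visit claims the tree edge
                      parent[neighbor] = node
                      queue.append(neighbor)
          return parent                        # maps child -> parent

      web_graph = {"/": ["/docs", "/blog"], "/docs": ["/blog"], "/blog": ["/"]}
      print(bfs_spanning_tree(web_graph, "/"))  # {'/': None, '/docs': '/', '/blog': '/'}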
4. Kara, S.: An ontology-based retrieval system using semantic indexing (2012) 0.01
    0.011551105 = product of:
      0.02310221 = sum of:
        0.02310221 = product of:
          0.04620442 = sum of:
            0.04620442 = weight(_text_:systems in 3829) [ClassicSimilarity], result of:
              0.04620442 = score(doc=3829,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.28811008 = fieldWeight in 3829, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3829)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
In this thesis, we present an ontology-based information extraction and retrieval system and its application to the soccer domain. In general, we deal with three issues in semantic search, namely usability, scalability and retrieval performance. We propose a keyword-based semantic retrieval approach. The performance of the system is improved considerably using domain-specific information extraction, inference and rules. Scalability is achieved by adapting a semantic indexing approach. The system is implemented using state-of-the-art Semantic Web technologies, and its performance is evaluated against traditional systems as well as query expansion methods. Furthermore, a detailed evaluation is provided to observe the performance gain due to domain-specific information extraction and inference. Finally, we show how we use semantic indexing to solve simple structural ambiguities.
    Source
    Information Systems. 37(2012) no. 4, S.294-305
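    As a loose illustration of the keyword-based semantic retrieval the abstract describes (a toy construction under our own assumptions; the thesis system adds domain-specific extraction, inference, and rules on top), documents can be indexed by ontology concepts rather than raw keywords:

      # Hypothetical keyword-to-concept mapping for a soccer ontology.
      ONTOLOGY = {"goal": "Event/Goal", "scored": "Event/Goal",
                  "keeper": "Role/Goalkeeper"}

      def semantic_index(docs):
          """Index documents under concepts instead of the literal keywords."""
          index = {}
          for doc_id, text in docs.items():
              for word in text.lower().split():
                  concept = ONTOLOGY.get(word)
                  if concept:
                      index.setdefault(concept, set()).add(doc_id)
          return index

      docs = {1: "keeper saves then a goal", 2: "midfielder scored late"}
      print(semantic_index(docs)["Event/Goal"])  # {1, 2}: different words, one concept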
  5. Kirk, J.: Theorising information use : managers and their work (2002) 0.01
    0.009529176 = product of:
      0.019058352 = sum of:
        0.019058352 = product of:
          0.038116705 = sum of:
            0.038116705 = weight(_text_:systems in 560) [ClassicSimilarity], result of:
              0.038116705 = score(doc=560,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.23767869 = fieldWeight in 560, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=560)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The focus of this thesis is information use. Although a key concept in information behaviour, information use has received little attention from information science researchers. Studies of other key concepts such as information need and information seeking are dominant in information behaviour research. Information use is an area of interest to information professionals who rely on research outcomes to shape their practice. There are few empirical studies of how people actually use information that might guide and refine the development of information systems, products and services.
  6. Baier Benninger, P.: Model requirements for the management of electronic records (MoReq2) : Anleitung zur Umsetzung (2011) 0.01
    0.008167865 = product of:
      0.01633573 = sum of:
        0.01633573 = product of:
          0.03267146 = sum of:
            0.03267146 = weight(_text_:systems in 4343) [ClassicSimilarity], result of:
              0.03267146 = score(doc=4343,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2037246 = fieldWeight in 4343, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4343)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
Faced with a growing mountain of digital information, many companies (including smaller ones), public administrations, and organizations are busy ordering and structuring their filing systems. Most organizations have some concept of document control in place. Records management takes a further-reaching approach in two respects in particular: first, it puts the context and circumstances of a record's creation at the centre, beyond everyday business; second, it prescribes rules for how unused or inactive documents are to be handled. With the «Model Requirements for the Management of Electronic Records» (MoReq), the European Commission created a standard covering all core areas of records management, and thus the entire life cycle of documents from creation and use through archiving and disposal. The «Anleitung zur Umsetzung» (implementation guide) summarizes the extensive list of requirements of MoReq2 (August 2008) and supplements it with explanatory sections, with the aim of serving as a handy instrument for introducing a records management system.
  7. Francu, V.: Multilingual access to information using an intermediate language (2003) 0.01
    0.007700737 = product of:
      0.015401474 = sum of:
        0.015401474 = product of:
          0.030802948 = sum of:
            0.030802948 = weight(_text_:systems in 1742) [ClassicSimilarity], result of:
              0.030802948 = score(doc=1742,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.19207339 = fieldWeight in 1742, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1742)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
While theoretically so widely available, information can be kept from more general use by linguistic barriers. The linguistic aspects of information languages, and particularly the chances of enhancing access to information by means of multilingual access facilities, form the substance of this thesis. The main problem of this research is thus to demonstrate that information retrieval can be improved by searching with multilingual thesaurus terms based on an intermediate or switching language. Universal classification systems in general can play the role of switching languages, for reasons dealt with in the forthcoming pages. The Universal Decimal Classification (UDC) in particular is the classification system used here as an example of a switching language. The question may arise: why a universal classification system and not another thesaurus? Because the UDC, like most classification systems, uses symbols. It is therefore language-independent, and the problems of compatibility between such a thesaurus and various other thesauri in different languages are avoided. Another question may still arise: why not, then, assign running numbers to the descriptors in a thesaurus and make a switching language out of the resulting enumerative system? Because of some other characteristics of the UDC: hierarchical structure and terminological richness, consistency and control. One big problem to answer is: can a thesaurus be built on the basis of a classification system in any and all its parts, and to what extent can this question be answered affirmatively? This depends much on the attributes of the universal classification system that can favourably be used for this purpose. Examples of different situations will be given and discussed, beginning with those classes of UDC which are best fitted for building a thesaurus structure out of them (classes which are both hierarchical and faceted)...
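    A minimal sketch of the switching-language idea, assuming a toy term-to-notation table (the terms and UDC notations here are illustrative, not taken from the thesis): thesaurus terms in any language resolve to one language-independent UDC notation, under which the documents are indexed.

      # Hypothetical multilingual thesaurus entries mapped to UDC notations.
      TERM_TO_UDC = {"library": "02", "bibliothek": "02", "biblioteca": "02",
                     "physics": "53", "physik": "53", "physique": "53"}
      DOCS_BY_UDC = {"02": ["doc-14", "doc-7"], "53": ["doc-3"]}

      def multilingual_search(term):
          """Search with a term in any language via the UDC switching language."""
          notation = TERM_TO_UDC.get(term.lower())
          return DOCS_BY_UDC.get(notation, [])

      print(multilingual_search("Bibliothek"))  # ['doc-14', 'doc-7'], same as "library"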
  8. Bierbach, P.: Wissensrepräsentation - Gegenstände und Begriffe : Bedingungen des Antinomieproblems bei Frege und Chancen des Begriffssystems bei Lambert (2001) 0.01
    0.0054452433 = product of:
      0.010890487 = sum of:
        0.010890487 = product of:
          0.021780973 = sum of:
            0.021780973 = weight(_text_:systems in 4498) [ClassicSimilarity], result of:
              0.021780973 = score(doc=4498,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1358164 = fieldWeight in 4498, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4498)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
The possibility of a universal encyclopedia, realizable on the basis of networked computers, leads (owing to the technically necessary reduction to a single kind of representative) to systems in which either only objects are represented, which then also stand in for concepts, or only concepts, which then also stand in for objects. The dissertation examines the logical systems of Gottlob Frege and Johann Heinrich Lambert as examples of such representation systems. Frege's system, based on the assumption that meanings are objective, failed because an antinomy could be demonstrated in it, which led twentieth-century philosophers to question the existence of objective meanings of expressions and the translatability of thoughts from natural languages into a formal language. The dissertation shows that this conclusion was premature: even under the assumption that knowledge is objective, the antinomy is triggered only by two additional requirements in Frege's logic, namely the one-to-one assignment of an object to every concept and the sharp delimitation of concepts, which forces the system to be closed. As an alternative, Lambert's system of concepts is discussed, in which every object is represented by a concept and, equivalently, by collections of concepts, and concepts can be replaced by collections of concepts. Neither of the two antinomy-triggering conditions is present here, and at the same time the progressive development of knowledge remains representable. The set-theoretic reconstruction of Lambert's system of concepts in the dissertation demonstrates its practical usability. The dissertation's result is the proof that there are representation systems which need not give up the assumption that knowledge can be objectified (an assumption necessary for checking the reliability of entries in the encyclopedia), because they do not rest on the presuppositions that trigger the antinomy.
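    The antinomy referred to is presumably Russell's paradox, which is derivable in Frege's system; in modern notation (our gloss, not the dissertation's formulation):

      R := \{\, x \mid x \notin x \,\} \quad\Longrightarrow\quad (R \in R \iff R \notin R)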
  9. Styltsvig, H.B.: Ontology-based information retrieval (2006) 0.01
    0.0054452433 = product of:
      0.010890487 = sum of:
        0.010890487 = product of:
          0.021780973 = sum of:
            0.021780973 = weight(_text_:systems in 1154) [ClassicSimilarity], result of:
              0.021780973 = score(doc=1154,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1358164 = fieldWeight in 1154, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1154)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
In this thesis, we present methods for introducing ontologies in information retrieval. The main hypothesis is that the inclusion of conceptual knowledge such as ontologies in the information retrieval process can contribute to the solution of major problems currently found in information retrieval. This utilization of ontologies poses a number of challenges. Our focus is on the use of similarity measures derived from the knowledge about relations between concepts in ontologies, the recognition of semantic information in texts and the mapping of this knowledge into the ontologies in use, as well as on how to fuse the ideas of ontological similarity and ontological indexing into a realistic information retrieval scenario. To recognize semantic knowledge in a text, shallow natural language processing is used during indexing to reveal knowledge at the level of noun phrases. Furthermore, we briefly cover the identification of semantic relations inside and between noun phrases, and discuss which kinds of problems are caused by an increase in compoundness with respect to the structure of concepts in the evaluation of queries. Measuring similarity between concepts based on distances in the structure of the ontology is discussed. In addition, a shared nodes measure is introduced and, based on a set of intuitive similarity properties, compared to a number of different measures. In this comparison the shared nodes measure appears to be superior, though more computationally complex. Some major problems with shared nodes are discussed; they relate to the way relations differ in the degree to which they bring the concepts they connect closer together. A generalized measure called weighted shared nodes is introduced to deal with these problems. Finally, the utilization of concept similarity in query evaluation is discussed. A semantic expansion approach that incorporates concept similarity is introduced, and a generalized fuzzy set retrieval model that applies expansion during query evaluation is presented. While not commonly used in present information retrieval systems, the fuzzy set model appears to provide the flexibility needed when generalizing to an ontology-based retrieval model, and with the introduction of a hierarchical fuzzy aggregation principle, compound concepts can be handled in a straightforward and natural manner.
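    As a hedged reconstruction of the shared nodes idea (our own Jaccard-style reading, not the thesis's exact or weighted definition), two concepts can be scored by how much of the ontology's upward structure they share:

      def ancestors(ontology, concept):
          """All nodes reachable upward from a concept, the concept included."""
          seen, stack = {concept}, [concept]
          while stack:
              for parent in ontology.get(stack.pop(), ()):
                  if parent not in seen:
                      seen.add(parent)
                      stack.append(parent)
          return seen

      def shared_nodes_similarity(ontology, a, b):
          """Overlap of upward closures: one plausible 'shared nodes' measure."""
          up_a, up_b = ancestors(ontology, a), ancestors(ontology, b)
          return len(up_a & up_b) / len(up_a | up_b)

      # concept -> list of parents (is-a edges)
      ontology = {"poodle": ["dog"], "beagle": ["dog"],
                  "dog": ["animal"], "cat": ["animal"]}
      print(shared_nodes_similarity(ontology, "poodle", "beagle"))  # 0.5
      print(shared_nodes_similarity(ontology, "poodle", "cat"))     # 0.25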
  10. Vocht, L. De: Exploring semantic relationships in the Web of Data : Semantische relaties verkennen in data op het web (2017) 0.00
    0.0034032771 = product of:
      0.0068065543 = sum of:
        0.0068065543 = product of:
          0.013613109 = sum of:
            0.013613109 = weight(_text_:systems in 4232) [ClassicSimilarity], result of:
              0.013613109 = score(doc=4232,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.08488525 = fieldWeight in 4232, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=4232)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
After the launch of the World Wide Web, it became clear that searching documents on the Web would not be trivial. Well-known engines for searching the web, like Google, focus on keyword search in web documents. The documents are structured and indexed to ensure keywords match documents as accurately as possible. However, searching by keywords does not always suffice. It is often the case that users do not know exactly how to formulate the search query or which keywords guarantee retrieving the most relevant documents. Besides that, users sometimes want to browse information rather than look up something specific. It turned out that there is a need for systems that enable more interactivity and facilitate the gradual refinement of search queries to explore the Web. Users expect more from the Web because the short keyword-based queries they pose during search do not suffice for all cases. On top of that, the Web is changing structurally. Apart from a collection of documents, the Web comprises more and more linked data: pieces of information structured so they can be processed by machines. The semantics applied in this way allow users to indicate their search intentions to machines precisely. This is made possible by describing data following controlled vocabularies, concept lists composed by experts and published on the Web with unique identifiers. Even so, it is still not trivial to explore data on the Web. There is a large variety of vocabularies, and various data sources use different terms to identify the same concepts.
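    As an illustration of the linked-data model the abstract sketches (toy triples with made-up identifiers, not data from the thesis), information lives in subject-predicate-object triples, and exploration means following links between uniquely identified concepts:

      # Toy triple store; every subject, predicate and object is an identifier.
      TRIPLES = [("ex:Gaudi", "ex:designed", "ex:SagradaFamilia"),
                 ("ex:SagradaFamilia", "ex:locatedIn", "ex:Barcelona"),
                 ("ex:Barcelona", "ex:partOf", "ex:Catalonia")]

      def explore(start, hops):
          """Gradually widen outward from a concept by following its links."""
          frontier, seen = {start}, {start}
          for _ in range(hops):
              frontier = {o for s, _p, o in TRIPLES if s in frontier} - seen
              seen |= frontier
          return seen

      print(explore("ex:Gaudi", 2))  # Gaudi, SagradaFamilia and Barcelona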