Search (1687 results, page 1 of 85)

  • Active filter: year_i:[2010 TO 2020}
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.12
    0.12347793 = product of:
      0.18521689 = sum of:
        0.12821928 = product of:
          0.38465783 = sum of:
            0.38465783 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.38465783 = score(doc=1826,freq=2.0), product of:
                0.41065353 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.048437484 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
        0.05699761 = weight(_text_:web in 1826) [ClassicSimilarity], result of:
          0.05699761 = score(doc=1826,freq=2.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.36057037 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.6666667 = coord(2/3)
    
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
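The nested score explanations above follow Lucene's ClassicSimilarity: a term's fieldWeight is tf x idf x fieldNorm, its queryWeight is idf x queryNorm, and boolean clauses are combined by summation plus a coord factor. A minimal sketch reproducing the "_text_:3a" leg of the first result (all constants taken from the explain output; float64 arithmetic differs from the tree's float32 values in the last digits):

```python
import math

# Constants read off the explain tree above.
freq, doc_freq, max_docs = 2.0, 24, 44218
query_norm, field_norm = 0.048437484, 0.078125

tf = math.sqrt(freq)                              # explain: 1.4142135
idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # explain: 8.478011
query_weight = idf * query_norm                   # explain: 0.41065353
field_weight = tf * idf * field_norm              # explain: 0.93669677
term_score = query_weight * field_weight          # explain: 0.38465783

# One of three clauses matched inside the nested boolean: coord(1/3).
clause_score = term_score * (1.0 / 3.0)           # explain: 0.12821928

# Add the "web" clause's weight and apply the outer coord(2/3).
total = (clause_score + 0.05699761) * (2.0 / 3.0) # explain: 0.12347793

print(total)
```

The same recipe reproduces every other explain tree on this page; only freq, doc_freq, fieldNorm, and the coord fractions change per document.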
  2. Eckert, K.: SKOS: eine Sprache für die Übertragung von Thesauri ins Semantic Web (2011) 0.09
    0.08547392 = product of:
      0.12821087 = sum of:
        0.10196043 = weight(_text_:web in 4331) [ClassicSimilarity], result of:
          0.10196043 = score(doc=4331,freq=10.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.6450079 = fieldWeight in 4331, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=4331)
        0.02625044 = product of:
          0.05250088 = sum of:
            0.05250088 = weight(_text_:22 in 4331) [ClassicSimilarity], result of:
              0.05250088 = score(doc=4331,freq=2.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.30952093 = fieldWeight in 4331, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4331)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The Semantic Web - or Linked Data - has the potential to revolutionize the availability of data and knowledge, as well as access to them. Knowledge organization systems such as thesauri, which index and structure data by subject, can contribute a great deal to this. Unfortunately, many of these systems are still available only in book form or in special-purpose applications. How, then, can they be put to use for the Semantic Web? The Simple Knowledge Organization System (SKOS) offers a way to "translate" knowledge organization systems into a form that can be cited on the Web and linked with other resources.
    Date
    15. 3.2011 19:21:22
    Theme
    Semantic Web
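The "translation" into SKOS that Eckert's abstract describes can be illustrated with a small sketch. This is not the author's code: it hand-emits Turtle for one hypothetical thesaurus record (URIs and labels invented); a real pipeline would use an RDF library such as rdflib.

```python
SKOS_PREFIX = "@prefix skos: <http://www.w3.org/2004/02/skos/core#> ."

def thesaurus_to_skos(uri, pref_label, broader=None, related=()):
    """Render one thesaurus record as SKOS triples in Turtle."""
    lines = [f"<{uri}> a skos:Concept ;",
             f'    skos:prefLabel "{pref_label}"@de']
    if broader:                       # BT relation -> skos:broader
        lines.append(f"    ; skos:broader <{broader}>")
    for rel in related:               # RT relations -> skos:related
        lines.append(f"    ; skos:related <{rel}>")
    lines.append("    .")
    return SKOS_PREFIX + "\n" + "\n".join(lines)

print(thesaurus_to_skos("http://example.org/c42", "Thesaurus",
                        broader="http://example.org/c7"))
```

Once a record is in this form, the concept URI can be cited on the Web and linked to other vocabularies, which is exactly the gain the abstract claims.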
  3. Hafner, R.; Schelling, B.: Automatisierung der Sacherschließung mit Semantic Web Technologie (2015) 0.08
    0.083823286 = product of:
      0.12573493 = sum of:
        0.07979666 = weight(_text_:web in 8365) [ClassicSimilarity], result of:
          0.07979666 = score(doc=8365,freq=2.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.50479853 = fieldWeight in 8365, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.109375 = fieldNorm(doc=8365)
        0.045938272 = product of:
          0.091876544 = sum of:
            0.091876544 = weight(_text_:22 in 8365) [ClassicSimilarity], result of:
              0.091876544 = score(doc=8365,freq=2.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.5416616 = fieldWeight in 8365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=8365)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    22. 6.2015 16:08:38
  4. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.08
    0.08152235 = product of:
      0.122283526 = sum of:
        0.102595694 = weight(_text_:web in 4649) [ClassicSimilarity], result of:
          0.102595694 = score(doc=4649,freq=18.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.64902663 = fieldWeight in 4649, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4649)
        0.01968783 = product of:
          0.03937566 = sum of:
            0.03937566 = weight(_text_:22 in 4649) [ClassicSimilarity], result of:
              0.03937566 = score(doc=4649,freq=2.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.23214069 = fieldWeight in 4649, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4649)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    More and more cultural heritage institutions publish their collections, vocabularies and metadata on the Web. The resulting Web of linked cultural data opens up exciting new possibilities for searching and browsing through these cultural heritage collections. We report on ongoing work in which we investigate the estimation of relevance in this Web of Culture. We study existing measures of semantic distance and how they apply to two use cases. The use cases relate to the structured, multilingual and multimodal nature of the Culture Web. We distinguish between measures using the Web, such as Google distance and PMI, and measures using the Linked Data Web, i.e. the semantic structure of metadata vocabularies. We perform a small study in which we compare these semantic distance measures to human judgements of relevance. Although it is too early to draw any definitive conclusions, the study provides new insights into the applicability of semantic distance measures to the Web of Culture, and clear starting points for further research.
    Date
    26.12.2011 13:40:22
    Theme
    Semantic Web
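The "Google distance" that Hollink and van Assem compare against is the Normalized Google Distance of Cilibrasi and Vitányi, computable from page-hit counts alone. A minimal sketch (the counts in the example are invented):

```python
import math

def ngd(f_x, f_y, f_xy, n):
    """Normalized Google Distance: 0 for terms that always co-occur,
    growing as co-occurrence gets rarer relative to the single terms.
    f_x, f_y: hit counts per term; f_xy: joint hits; n: pages indexed."""
    lx, ly, lxy = math.log(f_x), math.log(f_y), math.log(f_xy)
    return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

# Invented counts: a closely related pair vs. a mostly unrelated one.
print(ngd(9_000, 3_000, 2_500, 10**9))  # small distance
print(ngd(9_000, 3_000, 5, 10**9))      # much larger distance
```

The Linked-Data measures the paper contrasts this with use the semantic structure of metadata vocabularies instead of raw co-occurrence counts.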
  5. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.08
    0.07761066 = product of:
      0.11641598 = sum of:
        0.096728146 = weight(_text_:web in 2158) [ClassicSimilarity], result of:
          0.096728146 = score(doc=2158,freq=16.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.6119082 = fieldWeight in 2158, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2158)
        0.01968783 = product of:
          0.03937566 = sum of:
            0.03937566 = weight(_text_:22 in 2158) [ClassicSimilarity], result of:
              0.03937566 = score(doc=2158,freq=2.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.23214069 = fieldWeight in 2158, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2158)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This paper introduces a project to develop a reliable, cost-effective method for classifying Internet texts into register categories, and apply that approach to the analysis of a large corpus of web documents. To date, the project has proceeded in 2 key phases. First, we developed a bottom-up method for web register classification, asking end users of the web to utilize a decision-tree survey to code relevant situational characteristics of web documents, resulting in a bottom-up identification of register and subregister categories. We present details regarding the development and testing of this method through a series of 10 pilot studies. Then, in the second phase of our project we applied this procedure to a corpus of 53,000 web documents. An analysis of the results demonstrates the effectiveness of these methods for web register classification and provides a preliminary description of the types and distribution of registers on the web.
    Date
    4. 8.2015 19:22:04
  6. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.08
    0.07564735 = product of:
      0.11347102 = sum of:
        0.06410964 = product of:
          0.19232891 = sum of:
            0.19232891 = weight(_text_:3a in 4997) [ClassicSimilarity], result of:
              0.19232891 = score(doc=4997,freq=2.0), product of:
                0.41065353 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.048437484 = queryNorm
                0.46834838 = fieldWeight in 4997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4997)
          0.33333334 = coord(1/3)
        0.049361378 = weight(_text_:web in 4997) [ClassicSimilarity], result of:
          0.049361378 = score(doc=4997,freq=6.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.3122631 = fieldWeight in 4997, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
      0.6666667 = coord(2/3)
    
    Abstract
    While classifications are heavily used to categorize web content, the evolution of the web foresees a more formal structure - ontology - which can serve this purpose. Ontologies are core artifacts of the Semantic Web which enable machines to use inference rules to conduct automated reasoning on data. Lightweight ontologies bridge the gap between classifications and ontologies. A lightweight ontology (LO) is an ontology representing a backbone taxonomy where the concept of the child node is more specific than the concept of the parent node. Formal lightweight ontologies can be generated from their informal ones. The key applications of formal lightweight ontologies are document classification, semantic search, and data integration. However, these applications suffer from the following problems: the disambiguation accuracy of the state of the art NLP tools used in generating formal lightweight ontologies from their informal ones; the lack of background knowledge needed for the formal lightweight ontologies; and the limitation of ontology reuse. In this dissertation, we propose a novel solution to these problems in formal lightweight ontologies; namely, faceted lightweight ontology (FLO). FLO is a lightweight ontology in which terms, present in each node label, and their concepts, are available in the background knowledge (BK), which is organized as a set of facets. A facet can be defined as a distinctive property of the groups of concepts that can help in differentiating one group from another. Background knowledge can be defined as a subset of a knowledge base, such as WordNet, and often represents a specific domain.
    Content
    PhD dissertation at the International Doctorate School in Information and Communication Technology. Cf.: https://core.ac.uk/download/pdf/150083013.pdf.
  7. Papadakis, I. et al.: Highlighting timely information in libraries through social and semantic Web technologies (2016) 0.08
    0.07561323 = product of:
      0.113419846 = sum of:
        0.080606796 = weight(_text_:web in 2090) [ClassicSimilarity], result of:
          0.080606796 = score(doc=2090,freq=4.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.5099235 = fieldWeight in 2090, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=2090)
        0.032813054 = product of:
          0.06562611 = sum of:
            0.06562611 = weight(_text_:22 in 2090) [ClassicSimilarity], result of:
              0.06562611 = score(doc=2090,freq=2.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.38690117 = fieldWeight in 2090, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2090)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
    Theme
    Semantic Web
  8. Gödert, W.; Hubrich, J.; Nagelschmidt, M.: Semantic knowledge representation for information retrieval (2014) 0.07
    0.07344583 = product of:
      0.11016874 = sum of:
        0.09048091 = weight(_text_:web in 987) [ClassicSimilarity], result of:
          0.09048091 = score(doc=987,freq=14.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.57238775 = fieldWeight in 987, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=987)
        0.01968783 = product of:
          0.03937566 = sum of:
            0.03937566 = weight(_text_:22 in 987) [ClassicSimilarity], result of:
              0.03937566 = score(doc=987,freq=2.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.23214069 = fieldWeight in 987, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=987)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This book covers the basics of semantic web technologies and indexing languages, and describes their contribution to improve languages as a tool for subject queries and knowledge exploration. The book is relevant to information scientists, knowledge workers and indexers. It provides a suitable combination of theoretical foundations and practical applications.
    Date
    23. 7.2017 13:49:22
    LCSH
    Semantic Web
    World Wide Web / Subject access
    RSWK
    Semantic Web
    Subject
    Semantic Web
    World Wide Web / Subject access
    Semantic Web
  9. Keyser, P. de: Indexing : from thesauri to the Semantic Web (2012) 0.07
    0.06897125 = product of:
      0.10345687 = sum of:
        0.08376904 = weight(_text_:web in 3197) [ClassicSimilarity], result of:
          0.08376904 = score(doc=3197,freq=12.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.5299281 = fieldWeight in 3197, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3197)
        0.01968783 = product of:
          0.03937566 = sum of:
            0.03937566 = weight(_text_:22 in 3197) [ClassicSimilarity], result of:
              0.03937566 = score(doc=3197,freq=2.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.23214069 = fieldWeight in 3197, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3197)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Indexing consists of both novel and more traditional techniques. Cutting-edge indexing techniques, such as automatic indexing, ontologies, and topic maps, were developed independently of older techniques such as thesauri, but it is now recognized that these older methods also hold expertise. Indexing describes various traditional and novel indexing techniques, giving information professionals and students of library and information science a broad and comprehensible introduction to indexing. The title consists of twelve chapters: an introduction to subject headings and thesauri; automatic indexing versus manual indexing; techniques applied in automatic indexing of text material; automatic indexing of images; the black art of indexing moving images; automatic indexing of music; taxonomies and ontologies; metadata formats and indexing; tagging; topic maps; indexing the web; and the Semantic Web.
    Date
    24. 8.2016 14:03:22
    RSWK
    Semantic Web
    Subject
    Semantic Web
    Theme
    Semantic Web
  10. Social Media und Web Science : das Web als Lebensraum, Düsseldorf, 22. - 23. März 2012, Proceedings, hrsg. von Marlies Ockenfeld, Isabella Peters und Katrin Weller. DGI, Frankfurt am Main 2012 (2012) 0.07
    0.06851053 = product of:
      0.10276579 = sum of:
        0.07979666 = weight(_text_:web in 1517) [ClassicSimilarity], result of:
          0.07979666 = score(doc=1517,freq=8.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.50479853 = fieldWeight in 1517, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1517)
        0.022969136 = product of:
          0.045938272 = sum of:
            0.045938272 = weight(_text_:22 in 1517) [ClassicSimilarity], result of:
              0.045938272 = score(doc=1517,freq=2.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.2708308 = fieldWeight in 1517, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1517)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    RSWK
    Soziale Software / World Wide Web 2.0 / Kongress / Düsseldorf <2012>
    Subject
    Soziale Software / World Wide Web 2.0 / Kongress / Düsseldorf <2012>
  11. Alqaraleh, S.; Ramadan, O.; Salamah, M.: Efficient watcher based web crawler design (2015) 0.06
    0.064675555 = product of:
      0.097013324 = sum of:
        0.080606796 = weight(_text_:web in 1627) [ClassicSimilarity], result of:
          0.080606796 = score(doc=1627,freq=16.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.5099235 = fieldWeight in 1627, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1627)
        0.016406527 = product of:
          0.032813054 = sum of:
            0.032813054 = weight(_text_:22 in 1627) [ClassicSimilarity], result of:
              0.032813054 = score(doc=1627,freq=2.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.19345059 = fieldWeight in 1627, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1627)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose The purpose of this paper is to design a watcher-based crawler (WBC) that can crawl static and dynamic web sites and download only the updated and newly added web pages. Design/methodology/approach In the proposed WBC, a watcher file, which can be uploaded to a web site's server, prepares a report containing the addresses of the updated and newly added web pages. In addition, the WBC is split into five units, each responsible for a specific crawling process. Findings Several experiments have been conducted, and it has been observed that the proposed WBC increases the number of uniquely visited static and dynamic web sites compared with existing crawling techniques. In addition, the proposed watcher file not only allows crawlers to visit the updated and newly added pages, but also solves the crawlers' overlapping and communication problems. Originality/value The proposed WBC performs all crawling processes in the sense that it detects all updated and newly added pages automatically, without explicit human intervention and without downloading entire web sites.
    Date
    20. 1.2015 18:30:22
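The division of labour in the WBC abstract above - a server-side watcher file that reports changed pages, and a crawler that fetches only those - reduces to a diff between the report and the crawler's own state. A hypothetical sketch (data shapes and names invented, not from the paper):

```python
def pages_to_crawl(report, state):
    """report: {url: version} published by the site's watcher file;
    state:  {url: version} the crawler has already downloaded.
    Returns only new or updated URLs, so unchanged pages are skipped."""
    return [url for url, ver in report.items() if state.get(url) != ver]

state = {"/a.html": "v1", "/b.html": "v2"}
report = {"/a.html": "v1", "/b.html": "v3", "/c.html": "v1"}
print(pages_to_crawl(report, state))  # /b.html (updated), /c.html (new)
```

Because every crawler diffs against the same published report, two crawlers never need to coordinate to avoid fetching the same unchanged page, which is one way to read the abstract's "overlapping and communication" claim.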
  12. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.06
    0.061738964 = product of:
      0.092608444 = sum of:
        0.06410964 = product of:
          0.19232891 = sum of:
            0.19232891 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.19232891 = score(doc=4388,freq=2.0), product of:
                0.41065353 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.048437484 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
        0.028498804 = weight(_text_:web in 4388) [ClassicSimilarity], result of:
          0.028498804 = score(doc=4388,freq=2.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.18028519 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
      0.6666667 = coord(2/3)
    
    Footnote
    See: https://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=2ahUKEwizweHljdbcAhVS16QKHXcFD9QQFjABegQICRAB&url=https%3A%2F%2Fwww.researchgate.net%2Fpublication%2F271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls&usg=AOvVaw06orrdJmFF2xbCCp_hL26q.
  13. Hartmann, S.; Haffner, A.: Linked-RDA-Data in der Praxis (2010) 0.06
    0.06049058 = product of:
      0.09073587 = sum of:
        0.06448543 = weight(_text_:web in 1679) [ClassicSimilarity], result of:
          0.06448543 = score(doc=1679,freq=4.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.4079388 = fieldWeight in 1679, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=1679)
        0.02625044 = product of:
          0.05250088 = sum of:
            0.05250088 = weight(_text_:22 in 1679) [ClassicSimilarity], result of:
              0.05250088 = score(doc=1679,freq=2.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.30952093 = fieldWeight in 1679, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1679)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The new cataloguing standard "Resource Description and Access" (RDA) makes it possible to represent bibliographic data as well as authority data in a way that conforms to the Semantic Web. The talk shows what effects RDA has on cataloguing in libraries and on access to the indexed resources in the Semantic Web. Drawing on first experiences from practical implementations, it explains how bibliographic data can be made more accessible through RDA and Linked Data technologies and, above all, how they can be reused.
    Date
    13. 2.2011 20:22:23
  14. Kaiser, R.; Ockenfeld, M.; Skurcz, N.: Wann versteht mich mein Computer endlich? : 1. DGI-Konferenz: Semantic Web & Linked Data - Elemente zukünftiger Informationsinfrastrukturen (2011) 0.06
    0.06049058 = product of:
      0.09073587 = sum of:
        0.06448543 = weight(_text_:web in 4392) [ClassicSimilarity], result of:
          0.06448543 = score(doc=4392,freq=4.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.4079388 = fieldWeight in 4392, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=4392)
        0.02625044 = product of:
          0.05250088 = sum of:
            0.05250088 = weight(_text_:22 in 4392) [ClassicSimilarity], result of:
              0.05250088 = score(doc=4392,freq=2.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.30952093 = fieldWeight in 4392, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4392)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    "When will my computer finally understand me?" That could sum up the quintessence of the 1st DGI Conference, organized by the Deutsche Gesellschaft für Informationswissenschaft und Informationspraxis (DGI) on the occasion of this year's Frankfurt Book Fair, within whose framework the 62nd DGI Annual Meeting also took place. Under the motto "Semantic Web & Linked Data - Elements of Future Information Infrastructures", more than 400 information professionals from research, education, public administration, industry, and libraries came together from 7 to 9 October 2010 to present their work and findings on the next generation of web technologies and to discuss them with one another.
    Source
    BuB. 63(2011) H.1, S.22-23
  15. Petric, K.; Petric, T.; Krisper, M.; Rajkovic, V.: User profiling on a pilot digital library with the final result of a new adaptive knowledge management solution (2011) 0.06
    0.05872331 = product of:
      0.08808496 = sum of:
        0.06839713 = weight(_text_:web in 4560) [ClassicSimilarity], result of:
          0.06839713 = score(doc=4560,freq=8.0), product of:
            0.15807624 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.048437484 = queryNorm
            0.43268442 = fieldWeight in 4560, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4560)
        0.01968783 = product of:
          0.03937566 = sum of:
            0.03937566 = weight(_text_:22 in 4560) [ClassicSimilarity], result of:
              0.03937566 = score(doc=4560,freq=2.0), product of:
                0.16961981 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048437484 = queryNorm
                0.23214069 = fieldWeight in 4560, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4560)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In this article, several procedures (e.g., measurements, information retrieval analyses, power law, association rules, hierarchical clustering) are introduced that were applied to a pilot digital library. Information retrievals by web users between 01/01/2003 and 01/01/2006 on the internal search engine of the pilot digital library were analyzed. With the power law method of data processing, a constant information retrieval pattern was established that remains stable over a longer period of time. After this, the data were analyzed. On the basis of the accomplished measurements and analyses, a series of mental models of web users for global (educational) purposes were developed (e.g., the metamodel of the thought hierarchy of web users, the segmentation model of web users), and the users were profiled into four groups (adventurers, observers, applicable, and know-alls). The article concludes with the construction of a new knowledge management solution called a multidimensional rank thesaurus.
    Date
    13. 7.2011 14:47:22
  16. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.06
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
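    The thesis's three association measures are not reproduced here; as a hedged stand-in, a standard pointwise-mutual-information (PMI) baseline for scoring adjacent word pairs might look like the sketch below. The function name, threshold, and toy tokens are illustrative assumptions.

```python
import math
from collections import Counter

def bigram_pmi(tokens, min_count=2):
    """Rank adjacent word pairs by pointwise mutual information,
    a standard word-association measure for candidate multi-word
    terms; pairs rarer than min_count are discarded."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    scores = {}
    for (w1, w2), c in bigrams.items():
        if c < min_count:
            continue
        # PMI = log P(w1,w2) / (P(w1) * P(w2)), with counts over n tokens
        scores[(w1, w2)] = math.log((c * n) / (unigrams[w1] * unigrams[w2]))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

toks = ["multi", "word", "term", "x", "multi", "word",
        "term", "y", "multi", "word"]
ranked = bigram_pmi(toks)
```

    Like the approach described above, this baseline needs no training data and is language independent; the thesis combines such scores with the LocalMaxs algorithm, which is not shown here.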
    Date
    10. 1.2013 19:22:47
  17. Ceri, S.; Bozzon, A.; Brambilla, M.; Della Valle, E.; Fraternali, P.; Quarteroni, S.: Web Information Retrieval (2013) 0.05
    Abstract
    With the proliferation of huge amounts of (heterogeneous) data on the Web, the importance of information retrieval (IR) has grown considerably over the last few years. Big players in the computer industry, such as Google, Microsoft and Yahoo!, are the primary contributors of technology for fast access to Web-based information; and searching capabilities are now integrated into most information systems, ranging from business management software and customer relationship systems to social networks and mobile phone applications. Ceri and his co-authors aim at taking their readers from the foundations of modern information retrieval to the most advanced challenges of Web IR. To this end, their book is divided into three parts. The first part addresses the principles of IR and provides a systematic and compact description of basic information retrieval techniques (including binary, vector space and probabilistic models as well as natural language search processing) before focusing on its application to the Web. Part two addresses the foundational aspects of Web IR by discussing the general architecture of search engines (with a focus on the crawling and indexing processes), describing link analysis methods (specifically Page Rank and HITS), addressing recommendation and diversification, and finally presenting advertising in search (the main source of revenues for search engines). The third and final part describes advanced aspects of Web search, each chapter providing a self-contained, up-to-date survey on current Web research directions. Topics in this part include meta-search and multi-domain search, semantic search, search in the context of multimedia data, and crowd search. The book is ideally suited to courses on information retrieval, as it covers all Web-independent foundational aspects. Its presentation is self-contained and does not require prior background knowledge. 
    It can also be used in the context of classic courses on data management, allowing the instructor to cover both structured and unstructured data in various formats. Its classroom use is facilitated by a set of slides, which can be downloaded from www.search-computing.org.
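    The PageRank method covered in part two of the book can be sketched as a minimal power iteration over an adjacency dictionary. This is an illustrative sketch, not the book's code; the function name, damping factor, and three-node graph are assumptions.

```python
def pagerank(links, d=0.85, iters=50):
    """Minimal power-iteration PageRank over {node: [outgoing links]};
    dangling nodes spread their rank uniformly over all nodes."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - d) / n for u in nodes}
        for u, outs in links.items():
            if outs:
                share = d * rank[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:  # dangling node: distribute rank uniformly
                for v in nodes:
                    new[v] += d * rank[u] / n
        rank = new
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
pr = pagerank(graph)
```

    Node "c", with two in-links, ends up ranked highest; the ranks always sum to 1 because the damping mass (1 - d) is redistributed uniformly each iteration.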
    Date
    16.10.2013 19:22:44
  18. Chaudiron, S.; Ihadjadene, M.: Studying Web search engines from a user perspective : key concepts and main approaches (2012) 0.05
    Abstract
    This chapter shows that the wider use of Web search engines requires reconsidering the theoretical and methodological frameworks used to grasp new information practices. Beginning with an overview of the recent challenges posed by the dynamic nature of the Web, the chapter then traces information-behavior-related concepts in order to present the different approaches from the user perspective. The authors pay special attention to the concept of "information practice" and to other related concepts such as "use", "activity", and "behavior", which are widely used in the literature but not always strictly defined. They provide an overview of user-oriented studies that are meaningful for understanding the different contexts of use of electronic information access systems, focusing on five approaches: system-oriented approaches, theories of information seeking, cognitive and psychological approaches, management science approaches, and marketing approaches. Future directions of work are then outlined, including social searching and the ethical, cultural, and political dimensions of Web search engines. The authors conclude by considering the importance of Critical Theory for better understanding the role of Web search engines in modern society.
    Date
    20. 4.2012 13:22:37
  19. Firnkes, M.: Schöne neue Welt : der Content der Zukunft wird von Algorithmen bestimmt (2015) 0.05
    Abstract
    While not so long ago the internet was mainly just another information medium, its technical possibilities are currently exploding. These not only strengthen the mutual exchange among users; they also measure our daily habits in a great many ways. The mechanisms that make up the bought-and-paid-for web thereby become more complex. Most new technologies and applications conceal ways to perfect the seduction of consumers. Quite a few of them are also likely to be of interest to politics and other interest groups as an alternative channel for mobilizing voter groups and supporters. The following chapter names the most important trends of the coming years, together with their possible manipulative effects. Only if we observe who uses these future technologies, and how, can we prevent commercial excesses.
    Content
    With reference to the book: Firnkes, M.: Das gekaufte Web: wie wir online manipuliert werden. Hannover: Heise Zeitschriften Verlag 2015. 220 p.
    Date
    5. 7.2015 22:02:31
    Theme
    Semantic Web
  20. Joint, N.: Web 2.0 and the library : a transformational technology? (2010) 0.05
    Abstract
    Purpose - This paper is the final one in a series which has tried to give an overview of so-called transformational areas of digital library technology. The aim has been to assess how much real transformation these applications can bring about, in terms of creating genuine user benefit and also changing everyday library practice. Design/methodology/approach - The paper provides a summary of some of the legal and ethical issues associated with web 2.0 applications in libraries, associated with a brief retrospective view of some relevant literature. Findings - Although web 2.0 innovations have had a massive impact on the larger World Wide Web, the practical impact on library service delivery has been limited to date. What probably can be termed transformational in the effect of web 2.0 developments on library and information work is their effect on some underlying principles of professional practice. Research limitations/implications - The legal and ethical challenges of incorporating web 2.0 platforms into mainstream institutional service delivery need to be subject to further research, so that the risks associated with these innovations are better understood at the strategic and policy-making level. Practical implications - This paper makes some recommendations about new principles of library and information practice which will help practitioners make better sense of these innovations in their overall information environment. Social implications - The paper puts in context some of the more problematic social impacts of web 2.0 innovations, without denying the undeniable positive contribution of social networking to the sphere of human interactivity. Originality/value - This paper raises some cautionary points about web 2.0 applications without adopting a precautionary approach of total prohibition. However, none of the suggestions or analysis in this piece should be considered to constitute legal advice. 
    If such advice is required, the reader should consult appropriate legal professionals.
    Date
    22. 1.2011 17:54:04

Languages

  • e 1283
  • d 385
  • f 3
  • i 2
  • a 1
  • hu 1
  • pt 1

Types

  • a 1421
  • el 182
  • m 148
  • s 60
  • x 36
  • r 15
  • b 5
  • i 1
  • p 1
  • z 1

Themes

Subjects

Classifications