Search (1889 results, page 2 of 95)

  • type_ss:"a"
  • year_i:[2010 TO 2020}
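The two facet filters above use Lucene/Solr query syntax; note the mixed brackets in the year range, where `[` is an inclusive bound and `}` an exclusive one, so `year_i:[2010 TO 2020}` matches the years 2010 through 2019. A minimal sketch of how such a filtered, paginated request might be assembled (the endpoint URL is hypothetical; the `fq` values are the two filters shown, and 20 rows per page is inferred from "1889 results, page 2 of 95"):

```python
from urllib.parse import urlencode

# Hypothetical Solr request for page 2 of these results.
params = urlencode({
    "q": "*:*",
    # One fq per active filter; [ is inclusive, } is exclusive.
    "fq": ['type_ss:"a"', "year_i:[2010 TO 2020}"],
    "rows": 20,    # results per page
    "start": 20,   # offset for page 2
}, doseq=True)

url = "https://example.org/solr/select?" + params
```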
  1. Soylu, A.; Giese, M.; Jimenez-Ruiz, E.; Kharlamov, E.; Zheleznyakov, D.; Horrocks, I.: Towards exploiting query history for adaptive ontology-based visual query formulation (2014) 0.02
    0.017144224 = product of:
      0.08572112 = sum of:
        0.07789783 = weight(_text_:log in 1576) [ClassicSimilarity], result of:
          0.07789783 = score(doc=1576,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.42483553 = fieldWeight in 1576, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.046875 = fieldNorm(doc=1576)
        0.007823291 = product of:
          0.023469873 = sum of:
            0.023469873 = weight(_text_:29 in 1576) [ClassicSimilarity], result of:
              0.023469873 = score(doc=1576,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23319192 = fieldWeight in 1576, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1576)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
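The breakdown above is standard Lucene ClassicSimilarity "explain" output. As a sanity check, the printed figures for the `log` clause can be reproduced from the factors shown (a minimal sketch using Lucene's classic TF-IDF formulas; all input numbers are copied from the explanation above, and small last-digit differences stem from Lucene's single-precision arithmetic):

```python
import math

# Factors printed for the term "log" in doc 1576
doc_freq, max_docs = 197, 44218
freq, field_norm, query_norm = 2.0, 0.046875, 0.028611459

# idf = 1 + ln(maxDocs / (docFreq + 1))
idf = 1 + math.log(max_docs / (doc_freq + 1))   # ~ 6.4086204

tf = math.sqrt(freq)                            # ~ 1.4142135
query_weight = idf * query_norm                 # ~ 0.18335998
field_weight = tf * idf * field_norm            # ~ 0.42483553
weight = query_weight * field_weight            # ~ 0.07789783

# Entry score: sum of the two matching clauses (the second is the
# "29" clause, 0.007823291), scaled by coord(2/10) because only
# 2 of 10 query clauses matched this document.
score = (weight + 0.007823291) * 0.2            # ~ 0.017144224
```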
    
    Abstract
    Grounded in real industrial use cases, we recently proposed an ontology-based visual query system for SPARQL, named OptiqueVQS. Ontology-based visual query systems employ ontologies and visual representations to depict the domain of interest and queries, and hold promise for enabling end users without any technical background to access data on their own. However, even with comparatively small ontologies, the number of ontology elements to choose from grows drastically, which hinders usability. Therefore, in this paper, we propose a method that uses the log of past queries to rank and suggest query extensions as a user types a query, and we identify emerging issues to be addressed.
    Source
    Metadata and semantics research: 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014, Proceedings. Eds.: S. Closs et al
  2. Xie, I.; Joo, S.: Factors affecting the selection of search tactics : tasks, knowledge, process, and systems (2012) 0.02
    0.017144224 = product of:
      0.08572112 = sum of:
        0.07789783 = weight(_text_:log in 2739) [ClassicSimilarity], result of:
          0.07789783 = score(doc=2739,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.42483553 = fieldWeight in 2739, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.046875 = fieldNorm(doc=2739)
        0.007823291 = product of:
          0.023469873 = sum of:
            0.023469873 = weight(_text_:29 in 2739) [ClassicSimilarity], result of:
              0.023469873 = score(doc=2739,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23319192 = fieldWeight in 2739, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2739)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    This study investigated whether and how different factors in relation to task, user-perceived knowledge, search process, and system affect users' search tactic selection. Thirty-one participants, representing the general public with their own tasks, were recruited for this study. Multiple methods were employed to collect data, including pre-questionnaire, verbal protocols, log analysis, diaries, and post-questionnaires. Statistical analysis revealed that seven factors were significantly associated with tactic selection. These factors consist of work task types, search task types, familiarity with topic, search skills, search session length, search phases, and system types. Moreover, the study also discovered, qualitatively, in what ways these factors influence the selection of search tactics. Based on the findings, the authors discuss practical implications for system design to support users' application of multiple search tactics for each factor.
    Date
    29. 1.2016 19:02:38
  3. Neunzert, H.: Mathematische Modellierung : ein "curriculum vitae" (2012) 0.02
    0.01700264 = product of:
      0.1700264 = sum of:
        0.1700264 = product of:
          0.5100792 = sum of:
            0.5100792 = weight(_text_:c3 in 2255) [ClassicSimilarity], result of:
              0.5100792 = score(doc=2255,freq=4.0), product of:
                0.2789897 = queryWeight, product of:
                  9.7509775 = idf(docFreq=6, maxDocs=44218)
                  0.028611459 = queryNorm
                1.8283083 = fieldWeight in 2255, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  9.7509775 = idf(docFreq=6, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2255)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Content
    Lecture given at the conference "Geschichte und Modellierung", Jena, 3 February 2012. See: http://www.fmi.uni-jena.de/Fakult%C3%A4t/Institute+und+Abteilungen/Abteilung+f%C3%BCr+Didaktik/Kolloquien.html?highlight=neunzert.
  4. Jiang, J.-D.; Jiang, J.-Y.; Cheng, P.-J.: Cocluster hypothesis and ranking consistency for relevance ranking in web search (2019) 0.02
    0.016349753 = product of:
      0.08174877 = sum of:
        0.016833913 = weight(_text_:web in 5247) [ClassicSimilarity], result of:
          0.016833913 = score(doc=5247,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.18028519 = fieldWeight in 5247, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5247)
        0.06491486 = weight(_text_:log in 5247) [ClassicSimilarity], result of:
          0.06491486 = score(doc=5247,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.3540296 = fieldWeight in 5247, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5247)
      0.2 = coord(2/10)
    
    Abstract
    Conventional approaches to relevance ranking typically optimize ranking models by each query separately. The traditional cluster hypothesis also does not consider the dependency between related queries. The goal of this paper is to leverage similar search intents to perform ranking consistency so that the search performance can be improved accordingly. Different from the previous supervised approach, which learns relevance by click-through data, we propose a novel cocluster hypothesis to bridge the gap between relevance ranking and ranking consistency. A nearest-neighbors test is also designed to measure the extent to which the cocluster hypothesis holds. Based on the hypothesis, we further propose a two-stage unsupervised approach, in which two ranking heuristics and a cost function are developed to optimize the combination of consistency and uniqueness (or inconsistency). Extensive experiments have been conducted on a real and large-scale search engine log. The experimental results not only verify the applicability of the proposed cocluster hypothesis but also show that our approach is effective in boosting the retrieval performance of the commercial search engine and reaches a comparable performance to the supervised approach.
  5. Sieglerschmidt, J.: Wissensordnungen im analogen und im digitalen Zeitalter (2017) 0.02
    0.016030243 = product of:
      0.16030243 = sum of:
        0.16030243 = product of:
          0.48090726 = sum of:
            0.48090726 = weight(_text_:c3 in 4026) [ClassicSimilarity], result of:
              0.48090726 = score(doc=4026,freq=8.0), product of:
                0.2789897 = queryWeight, product of:
                  9.7509775 = idf(docFreq=6, maxDocs=44218)
                  0.028611459 = queryNorm
                1.7237456 = fieldWeight in 4026, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  9.7509775 = idf(docFreq=6, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4026)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Content
    See: https://books.google.de/books?hl=de&lr=&id=0rtGDwAAQBAJ&oi=fnd&pg=PA35&dq=inhaltserschlie%C3%9Fung+OR+sacherschlie%C3%9Fung&ots=5u0TwCbFqE&sig=GGw3Coc21CINkone-6Lx8LaSAjY#v=onepage&q=inhaltserschlie%C3%9Fung%20OR%20sacherschlie%C3%9Fung&f=false.
  6. Padmavathi, T.; Krishnamurthy, M.: Semantic Web tools and techniques for knowledge organization : an overview (2017) 0.01
    0.014296171 = product of:
      0.071480855 = sum of:
        0.062353685 = weight(_text_:web in 3618) [ClassicSimilarity], result of:
          0.062353685 = score(doc=3618,freq=14.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.6677857 = fieldWeight in 3618, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3618)
        0.009127174 = product of:
          0.027381519 = sum of:
            0.027381519 = weight(_text_:29 in 3618) [ClassicSimilarity], result of:
              0.027381519 = score(doc=3618,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.27205724 = fieldWeight in 3618, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3618)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    The enormous amount of information generated every day and spread across the web is diverse in nature and far beyond human consumption. To overcome this difficulty, Tim Berners-Lee proposed in 1989 the transformation of current unstructured information into a structured form, called the "Semantic Web," to enable computers to understand and interpret the information they store. The aim of the semantic web is the integration of heterogeneous and distributed data spread across the web for knowledge discovery. The core semantic web technologies are also discussed: the knowledge representation languages RDF and OWL, ontology editors and reasoning tools, and ontology query languages such as SPARQL.
    Date
    29. 9.2017 18:30:57
    Theme
    Semantic Web
  7. Aloteibi, S.; Sanderson, M.: Analyzing geographic query reformulation : an exploratory study (2014) 0.01
    0.014275125 = product of:
      0.07137562 = sum of:
        0.06491486 = weight(_text_:log in 1177) [ClassicSimilarity], result of:
          0.06491486 = score(doc=1177,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.3540296 = fieldWeight in 1177, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1177)
        0.006460763 = product of:
          0.019382289 = sum of:
            0.019382289 = weight(_text_:22 in 1177) [ClassicSimilarity], result of:
              0.019382289 = score(doc=1177,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.19345059 = fieldWeight in 1177, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1177)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    Search engine users typically engage in multiquery sessions in their quest to fulfill their information needs. Despite a plethora of research findings suggesting that a significant group of users look for information within a specific geographical scope, existing reformulation studies lack a focused analysis of how users reformulate geographic queries. This study comprehensively investigates the ways in which users reformulate such needs in an attempt to fill this gap in the literature. Reformulated sessions were sampled from a query log of a major search engine to extract 2,400 entries that were manually inspected to filter geo sessions. This filter identified 471 search sessions that included geographical intent, and these sessions were analyzed quantitatively and qualitatively. The results revealed that one in five of the users who reformulated their queries were looking for geographically related information. They reformulated their queries by changing the content of the query rather than the structure. Users were not following a unified sequence of modifications and instead performed a single reformulation action. However, in some cases it was possible to anticipate their next move. A number of tasks in geo modifications were identified, including standard, multi-needs, multi-places, and hybrid approaches. The research concludes that it is important to specialize query reformulation studies to focus on particular query types rather than generically analyzing them, as it is apparent that geographic queries have their special reformulation characteristics.
    Date
    26. 1.2014 18:48:22
  8. Hebestreit, S.: "Es darf keine auf ewig festgelegten IP-Adressen geben" : Internetprotokoll IPv6 (2012) 0.01
    0.014224318 = product of:
      0.07112159 = sum of:
        0.06590606 = weight(_text_:schutz in 226) [ClassicSimilarity], result of:
          0.06590606 = score(doc=226,freq=2.0), product of:
            0.20656188 = queryWeight, product of:
              7.2195506 = idf(docFreq=87, maxDocs=44218)
              0.028611459 = queryNorm
            0.31906208 = fieldWeight in 226, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2195506 = idf(docFreq=87, maxDocs=44218)
              0.03125 = fieldNorm(doc=226)
        0.0052155275 = product of:
          0.015646582 = sum of:
            0.015646582 = weight(_text_:29 in 226) [ClassicSimilarity], result of:
              0.015646582 = score(doc=226,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.15546128 = fieldWeight in 226, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=226)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Content
    "Grundsätzliche Bedenken gegen den neuen IPv6-Standard möchte Thilo Weichert nicht formulieren. Doch der Datenschutzbeauftragte des Landes Schleswig-Holstein sieht ein wachsendes Risiko, dass mit dem neuen Standard "genaue Profile von Nutzern angelegt werden können". Genau dies gelte es aber zu verhindern. Deshalb wirbt Weichert, einer der renommiertesten Datenschützer der Republik, dafür, die Vergabe von festen IP-Adressen für technische Geräte zu verhindern. "Als Datenschützer arbeiten wir ganz massiv darauf hin, dass eben keine auf ewig festgelegten IP-Nummern vergeben werden", sagte er im Gespräch mit der Frankfurter Rundschau. Er sieht sonst die Gefahr, dass es vergleichsweise einfach wird, Nutzerprofile zu erheben. Ständiger Kennzeichenwechsel Relativ einfach könnten Informationen über Nutzer miteinander verknüpft werden. Über feste IP-Adressen sei herauszubekommen, über welches Smartphone ein Nutzer verfügt, wie alt sein Kühlschrank ist, welche Kaffeemaschine er hat und wie häufig er von seinem Heim-Rechner aus im Internet unterwegs ist. Daten, die die wenigsten Nutzer gerne preisgeben. Daten auch, an denen Industrie, Handel und Werbung sehr interessiert sind. Weicherts Vorschlag: "Die Adressen sollten weiterhin dynamisch vergeben wird." Schon bisher werden IP-Adressen immer wieder gewechselt, weil nicht für alle Nutzer zu jedem möglichen Zeitpunkt ausreichend Adressen vorhanden sind. So bewegen sich Internetnutzer quasi mit wechselndem Kennzeichen durchs Internet. Nur der Provider, der Internetanbieter, kann anhand seiner Datenbank ermitteln, welcher Anschluss sich zu einem bestimmten Zeitpunkt mit einer bestimmten IP-Adresse im Netz bewegt hat. Datenschützer sehen in diesem ständigen Wechsel einen wichtigen Schutz der Privatsphäre. Mit dem neuen Standard besteht der Engpass nicht mehr. Für jeden Nutzer und für all seine internetfähigen Geräte ist eine eigene Nummer vorrätig. 
Dennoch, so verlangen es die deutschen Datenschützer, sollten die Adressen weiterhin gewechselt werden. "Wir wollen die geltenden Standards von IPv4, die eine Identifizierung erschweren, deshalb fortschreiben", sagt Weichert. Die Industrie dringt auf feste IP-Adressen, weil sie ein großes Interesse an den anfallenden Daten hat - um Werbung zu schalten, um das Nutzerverhalten erfassen und die Daten auswerten zu können. "Es besteht ein echter Interessenkonflikt", sagt Weichert. Es drohe eine Auseinandersetzung zwischen den Verwertungsinteressen der Industrie an den zusätzlichen digitalen Spuren, die mit IPv6 möglich sind, und den Interessen von Datenschützern, Verbrauchern - "und hoffentlich der Politik". Einen Vorteil könnte IPv6 aber auch bieten, die Chance auf mehr Anonymität im Netz. Denn durch die viel höhere Zahl an möglichen IP-Adressen wird es künftig schwerer werden, einzelne Nutzer oder Geräte zuordnen zu können - solange IP-Nummern weiter dynamisch vergeben werden."
    Date
    9. 6.2012 17:42:29
  9. Nachreiner, T.: Akademische Soziale Netzwerke und ihre Auswirkungen auf wissenschaftliche Bibliotheken (2019) 0.01
    0.014062521 = product of:
      0.070312604 = sum of:
        0.05011191 = weight(_text_:kommunikation in 5695) [ClassicSimilarity], result of:
          0.05011191 = score(doc=5695,freq=2.0), product of:
            0.14706601 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.028611459 = queryNorm
            0.34074432 = fieldWeight in 5695, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.046875 = fieldNorm(doc=5695)
        0.020200694 = weight(_text_:web in 5695) [ClassicSimilarity], result of:
          0.020200694 = score(doc=5695,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.21634221 = fieldWeight in 5695, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=5695)
      0.2 = coord(2/10)
    
    Abstract
    Social network sites (SNS) have been an increasingly formative phenomenon on the World Wide Web since the turn of the millennium, successively extending across all functional domains of society and influencing its communication structures. The science system, too, is affected by the promise that their 'social' linking logic could intensify its own communication and offer manifold opportunities for optimization. Academic social network sites (ASNS) such as ResearchGate, Academia.edu, and Mendeley in particular are the visible expression of a development taking place amid the comprehensive digitization of publication and reputation culture in the sciences. Building on an initial sketch of development trends in scholarly publishing, the paper first outlines the media-specific functional economy of SNS and transfers it to ASNS. Drawing on the research literature, it then sketches the central usage motives and patterns of researchers, examines the relationship of ASNS use to institutional open access policies, and discusses the consequences of the ASNS economy against the background of scientometric debates. Finally, it considers which constraints and impulses for action may arise from this for academic libraries.
  10. Ohly, H.P.: Wissenskommunikation und -organisation : Quo vadis? (2010) 0.01
    0.013501792 = product of:
      0.06750896 = sum of:
        0.05846389 = weight(_text_:kommunikation in 3727) [ClassicSimilarity], result of:
          0.05846389 = score(doc=3727,freq=2.0), product of:
            0.14706601 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.028611459 = queryNorm
            0.39753503 = fieldWeight in 3727, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3727)
        0.009045068 = product of:
          0.027135205 = sum of:
            0.027135205 = weight(_text_:22 in 3727) [ClassicSimilarity], result of:
              0.027135205 = score(doc=3727,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.2708308 = fieldWeight in 3727, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3727)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    This paper discusses general developments in the field of knowledge communication. Even a look at the techniques of information and knowledge organization allows some conclusions about the future knowledge exchange to be expected and its formalization. Future aspects of knowledge organization and communication were also discussed in panels at the German ISKO conference 2006 in Vienna, at the IKONE conference 2007 in Bangalore, and at WissKom 2007 in Jülich. From these discussions and from observations on new media techniques, conclusions are drawn about expected and recommended future developments.
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Eds.: J. Sieglerschmidt and H.P. Ohly
  11. Avni, O.; Steinebach, M.: Digitale Wasserzeichen für textuelle Informationen (2010) 0.01
    0.013181212 = product of:
      0.13181213 = sum of:
        0.13181213 = weight(_text_:schutz in 2811) [ClassicSimilarity], result of:
          0.13181213 = score(doc=2811,freq=2.0), product of:
            0.20656188 = queryWeight, product of:
              7.2195506 = idf(docFreq=87, maxDocs=44218)
              0.028611459 = queryNorm
            0.63812417 = fieldWeight in 2811, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2195506 = idf(docFreq=87, maxDocs=44218)
              0.0625 = fieldNorm(doc=2811)
      0.1 = coord(1/10)
    
    Abstract
    Digital watermarks today find manifold applications in the multimedia field and, especially for copyright protection, have emerged as an alternative to digital rights management. What is well established today for images, music, and video, however, still poses challenges for research on textual documents. We present several known strategies for marking documents individually, either by altering their formatting or by modifying the text itself. For the latter approach we contribute methods of our own, which exploit the few degrees of freedom of the German language.
  12. Li, X.; Schijvenaars, B.J.A.; Rijke, M.de: Investigating queries and search failures in academic search (2017) 0.01
    0.013079804 = product of:
      0.06539902 = sum of:
        0.013467129 = weight(_text_:web in 5033) [ClassicSimilarity], result of:
          0.013467129 = score(doc=5033,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.14422815 = fieldWeight in 5033, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=5033)
        0.05193189 = weight(_text_:log in 5033) [ClassicSimilarity], result of:
          0.05193189 = score(doc=5033,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.2832237 = fieldWeight in 5033, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.03125 = fieldNorm(doc=5033)
      0.2 = coord(2/10)
    
    Abstract
    Academic search concerns the retrieval and profiling of information objects in the domain of academic research. In this paper we reveal important observations about academic search queries, and provide an algorithmic solution to address a type of failure during search sessions: null queries. We start by providing a general characterization of academic search queries, by analyzing a large-scale transaction log of a leading academic search engine. Unlike previous small-scale analyses of academic search queries, we find important differences from the query characteristics known from web search. For example, in academic search there is a substantially bigger proportion of entity queries, and a heavier tail in the query length distribution. We then focus on search failures and, in particular, on null queries that lead to an empty search engine result page, on null sessions that contain such null queries, and on users who are prone to issue null queries. In academic search approximately 1 in 10 queries is a null query, and 25% of the sessions contain a null query. They appear in different types of search sessions, and prevent users from achieving their search goal. To address the high rate of null queries in academic search, we consider the task of providing query suggestions. Specifically we focus on a highly frequent query type: non-boolean informational queries. To this end we need to overcome query sparsity and make effective use of session information. We find that using entities helps to surface more relevant query suggestions in the face of query sparsity. We also find that query suggestions should be conditioned on the type of session in which they are offered to be more effective. After casting the session classification problem as a multi-label classification problem, we generate session-conditional query suggestions based on predicted session type. We find that this session-conditional method leads to significant improvements over a generic query suggestion method. Personalization yields very little further improvement over session-conditional query suggestions.
  13. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.01
    0.012977823 = product of:
      0.06488911 = sum of:
        0.057136193 = weight(_text_:web in 2158) [ClassicSimilarity], result of:
          0.057136193 = score(doc=2158,freq=16.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.6119082 = fieldWeight in 2158, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2158)
        0.0077529154 = product of:
          0.023258746 = sum of:
            0.023258746 = weight(_text_:22 in 2158) [ClassicSimilarity], result of:
              0.023258746 = score(doc=2158,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23214069 = fieldWeight in 2158, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2158)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    This paper introduces a project to develop a reliable, cost-effective method for classifying Internet texts into register categories, and apply that approach to the analysis of a large corpus of web documents. To date, the project has proceeded in 2 key phases. First, we developed a bottom-up method for web register classification, asking end users of the web to utilize a decision-tree survey to code relevant situational characteristics of web documents, resulting in a bottom-up identification of register and subregister categories. We present details regarding the development and testing of this method through a series of 10 pilot studies. Then, in the second phase of our project we applied this procedure to a corpus of 53,000 web documents. An analysis of the results demonstrates the effectiveness of these methods for web register classification and provides a preliminary description of the types and distribution of registers on the web.
    Date
    4. 8.2015 19:22:04
  14. Stamou, G.; Chortaras, A.: Ontological query answering over semantic data (2017) 0.01
    0.0128599135 = product of:
      0.06429957 = sum of:
        0.053868517 = weight(_text_:web in 3926) [ClassicSimilarity], result of:
          0.053868517 = score(doc=3926,freq=8.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.5769126 = fieldWeight in 3926, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=3926)
        0.010431055 = product of:
          0.031293165 = sum of:
            0.031293165 = weight(_text_:29 in 3926) [ClassicSimilarity], result of:
              0.031293165 = score(doc=3926,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.31092256 = fieldWeight in 3926, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3926)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Pages
    S.29-63
    Series
    Lecture Notes in Computer Science; 10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
    Source
    Reasoning Web: Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures. Eds.: Ianni, G. et al
    Theme
    Semantic Web
  15. Bensman, S.J.; Smolinsky, L.J.: Lotka's inverse square law of scientific productivity : its methods and statistics (2017) 0.01
    0.012852487 = product of:
      0.12852487 = sum of:
        0.12852487 = weight(_text_:log in 3698) [ClassicSimilarity], result of:
          0.12852487 = score(doc=3698,freq=4.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.7009429 = fieldWeight in 3698, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3698)
      0.1 = coord(1/10)
    
    Abstract
    This brief communication analyzes, from the standpoint of modern theory, the statistics and methods Lotka used to derive his inverse square law of scientific productivity. It finds that he violated the norms of this theory by extremely truncating his data on the right. It also shows that, by basing the derivation of his law on this very method, Lotka himself played an important role in establishing the commonly used practice of identifying power-law behavior by the R² fit to a regression line on a log-log plot, a practice that modern theory considers unreliable.
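Lotka's inverse square law states that the number of authors producing n papers is proportional to 1/n². Since Σ 1/n² = π²/6, the normalized law predicts that a share of 6/π² ≈ 0.61 of authors publish exactly one paper. A quick stdlib check of that arithmetic:

```python
import math

def lotka_share(n: int) -> float:
    """Predicted fraction of authors with exactly n papers under the inverse square law."""
    return (1.0 / n**2) / (math.pi**2 / 6)

share_one = lotka_share(1)  # ≈ 0.608: roughly 61% of authors publish a single paper
share_two = lotka_share(2)  # ≈ 0.152
```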
  16. Pohl, O.: rdfedit: user supporting Web application for creating and manipulating RDF instance data (2014) 0.01
    0.012365132 = product of:
      0.061825655 = sum of:
        0.05269848 = weight(_text_:web in 1571) [ClassicSimilarity], result of:
          0.05269848 = score(doc=1571,freq=10.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.5643819 = fieldWeight in 1571, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1571)
        0.009127174 = product of:
          0.027381519 = sum of:
            0.027381519 = weight(_text_:29 in 1571) [ClassicSimilarity], result of:
              0.027381519 = score(doc=1571,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.27205724 = fieldWeight in 1571, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1571)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    rdfedit is a web application running on Django, rdflib, and jQuery DataTables that supports novices in the field of Semantic Web technologies in creating RDF instance metadata. By utilizing the Semantic Web search engine Sindice, rdfedit can transform literals into URIs, fetch triples from external resources, and import them into the user's local graph. Metadata experts can easily configure these features of rdfedit to fit their preferences regarding metadata schemata, so metadata creators with little knowledge of Semantic Web technologies can create RDF data in a fast and consistent manner while also following the Linked Data principles.
    Source
    Metadata and semantics research: 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014, Proceedings. Eds.: S. Closs et al
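The core idea behind rdfedit's literal-to-URI transformation, replacing a plain string with a resolvable URI so the data links outward, can be pictured with a library-free sketch. This is not rdfedit's actual code (the tool performs the lookup via Sindice, which is not reproduced here), and the reconciliation table is illustrative:

```python
# Hypothetical reconciliation table standing in for a Sindice lookup.
KNOWN_URIS = {
    "Berlin": "http://dbpedia.org/resource/Berlin",
}

def as_term(value: str) -> str:
    """Render a value as an N-Triples object: a URI if known, else a literal."""
    uri = KNOWN_URIS.get(value)
    return f"<{uri}>" if uri else f'"{value}"'

line = " ".join((
    "<http://example.org/doc/1>",
    "<http://purl.org/dc/terms/spatial>",
    as_term("Berlin"),
)) + " ."
# <http://example.org/doc/1> <http://purl.org/dc/terms/spatial> <http://dbpedia.org/resource/Berlin> .
```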
  17. Papadakis, I. et al.: Highlighting timely information in libraries through social and semantic Web technologies (2016) 0.01
    0.012107004 = product of:
      0.06053502 = sum of:
        0.047613494 = weight(_text_:web in 2090) [ClassicSimilarity], result of:
          0.047613494 = score(doc=2090,freq=4.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.5099235 = fieldWeight in 2090, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=2090)
        0.012921526 = product of:
          0.038764577 = sum of:
            0.038764577 = weight(_text_:22 in 2090) [ClassicSimilarity], result of:
              0.038764577 = score(doc=2090,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.38690117 = fieldWeight in 2090, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2090)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
    Theme
    Semantic Web
  18. Hjoerland, B.: Theories of knowledge organization - theories of knowledge (2017) 0.01
    0.011982392 = product of:
      0.05991196 = sum of:
        0.023567477 = weight(_text_:web in 3494) [ClassicSimilarity], result of:
          0.023567477 = score(doc=3494,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.25239927 = fieldWeight in 3494, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3494)
        0.036344483 = product of:
          0.054516725 = sum of:
            0.027381519 = weight(_text_:29 in 3494) [ClassicSimilarity], result of:
              0.027381519 = score(doc=3494,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.27205724 = fieldWeight in 3494, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3494)
            0.027135205 = weight(_text_:22 in 3494) [ClassicSimilarity], result of:
              0.027135205 = score(doc=3494,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.2708308 = fieldWeight in 3494, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3494)
          0.6666667 = coord(2/3)
      0.2 = coord(2/10)
    
    Pages
    S.22-36
    Source
    Theorie, Semantik und Organisation von Wissen: Proceedings der 13. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und dem 13. Internationalen Symposium der Informationswissenschaft der Higher Education Association for Information Science (HI) Potsdam (19.-20.03.2013): 'Theory, Information and Organization of Knowledge' / Proceedings der 14. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und Natural Language & Information Systems (NLDB) Passau (16.06.2015): 'Lexical Resources for Knowledge Organization' / Proceedings des Workshops der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) auf der SEMANTICS Leipzig (1.09.2014): 'Knowledge Organization and Semantic Web' / Proceedings des Workshops der Polnischen und Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) Cottbus (29.-30.09.2011): 'Economics of Knowledge Production and Organization'. Hrsg. von W. Babik, H.P. Ohly u. K. Weber
  19. Lösse, M.; Svensson, L.: "Classification at a Crossroad" : Internationales UDC-Seminar 2009 in Den Haag, Niederlande (2010) 0.01
    0.011555167 = product of:
      0.057775833 = sum of:
        0.020200694 = weight(_text_:web in 4379) [ClassicSimilarity], result of:
          0.020200694 = score(doc=4379,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.21634221 = fieldWeight in 4379, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4379)
        0.03757514 = product of:
          0.05636271 = sum of:
            0.023469873 = weight(_text_:29 in 4379) [ClassicSimilarity], result of:
              0.023469873 = score(doc=4379,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23319192 = fieldWeight in 4379, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4379)
            0.032892838 = weight(_text_:22 in 4379) [ClassicSimilarity], result of:
              0.032892838 = score(doc=4379,freq=4.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.32829654 = fieldWeight in 4379, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4379)
          0.6666667 = coord(2/3)
      0.2 = coord(2/10)
    
    Abstract
    On 29 and 30 October 2009, the second international UDC seminar, on the theme "Classification at a Crossroad", took place at the Royal Library (Koninklijke Bibliotheek) in The Hague. As with the first conference of this kind in 2007, it was organized by the UDC Consortium (UDCC). This year's event focused on indexing the World Wide Web through better use of classifications (the UDC in particular), including user-friendly representations of information and knowledge. Standards, new technologies and services, semantic search, and multilingual access also played a role. 135 participants from 35 countries came to The Hague for the occasion. The programme covered a broad range, with 22 papers from 14 different countries; Great Britain was most strongly represented, with five contributions. On both conference days the thematic focus was set by the opening talks, which were then explored in greater depth in a total of six thematic sessions.
    Date
    22. 1.2010 15:06:54
  20. Maemura, E.; Worby, N.; Milligan, I.; Becker, C.: If these crawls could talk : studying and documenting web archives provenance (2018) 0.01
    0.011404229 = product of:
      0.057021145 = sum of:
        0.050501734 = weight(_text_:web in 4465) [ClassicSimilarity], result of:
          0.050501734 = score(doc=4465,freq=18.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.5408555 = fieldWeight in 4465, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4465)
        0.00651941 = product of:
          0.019558229 = sum of:
            0.019558229 = weight(_text_:29 in 4465) [ClassicSimilarity], result of:
              0.019558229 = score(doc=4465,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.19432661 = fieldWeight in 4465, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4465)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    The increasing use and prominence of web archives raises the urgency of establishing mechanisms for transparency in the making of web archives, so that a web archive's provenance, scoping, and absences can be evaluated. Some choices and process events are captured automatically, but their interactions are not currently well understood or documented. This study examined the decision space of web archiving and its role in shaping what is and what is not captured in the process. By comparing how three different web archive collections were created and documented, we investigate how curatorial decisions interact with technical and external factors, and we identify commonalities and differences. The findings reveal the need to understand both the social and technical context that shapes those decisions and the ways in which these individual decisions interact. Based on the study, we propose a framework for documenting key dimensions of a collection that addresses the situated nature of the organizational context, the technical specificities, and the unique characteristics of the web materials that are the focus of a collection. The framework enables future researchers to undertake empirical work studying the process of creating web archive collections in different contexts.
    Date
    29. 9.2018 13:11:27
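The documentation framework is described only at a high level in the abstract; a hypothetical record capturing the three dimensions it names (organizational context, technical specifics, and the characteristics of the web materials) might look like the following. All field names and values are illustrative, not the authors' published schema:

```python
import json

# Hypothetical provenance record for one web archive collection.
collection_doc = {
    "collection": "City politics 2015",
    "organizational_context": {
        "curator": "Example University Library",
        "scoping_decisions": ["seed list limited to .ca domains"],
    },
    "technical_configuration": {
        "crawler": "Heritrix",
        "crawl_frequency": "quarterly",
        "known_absences": ["robots.txt-excluded pages"],
    },
    "web_material_characteristics": {
        "formats": ["html", "pdf"],
        "dynamic_content_captured": False,
    },
}

print(json.dumps(collection_doc, indent=2))
```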

Languages

  • e 1490
  • d 390
  • f 2
  • i 2
  • a 1
  • sp 1

Types

  • el 145
  • b 4
  • s 1
  • x 1