Search (2056 results, page 1 of 103)

  • year_i:[2010 TO 2020}
  1. Becker, H.-G.: MODS2FRBRoo : Ein Tool zur Anbindung von bibliografischen Daten an eine Ontologie für Begriffe und Informationen (2010) 0.08
    0.081550285 = product of:
      0.16310057 = sum of:
        0.02956491 = weight(_text_:data in 4265) [ClassicSimilarity], result of:
          0.02956491 = score(doc=4265,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.24455236 = fieldWeight in 4265, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4265)
        0.13353567 = weight(_text_:becker in 4265) [ClassicSimilarity], result of:
          0.13353567 = score(doc=4265,freq=2.0), product of:
            0.25693014 = queryWeight, product of:
              6.7201533 = idf(docFreq=144, maxDocs=44218)
              0.03823278 = queryNorm
            0.51973534 = fieldWeight in 4265, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7201533 = idf(docFreq=144, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4265)
      0.5 = coord(2/4)
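Each hit's score breakdown above is Lucene ClassicSimilarity explain() output. Its arithmetic can be reproduced directly from the printed factors; the following is a minimal sketch in plain Python (the function names are ours, not Lucene API calls) that reconstructs entry 1's numbers:

```python
import math

# ClassicSimilarity building blocks, as printed in the explain() trees:
#   idf         = 1 + ln(maxDocs / (docFreq + 1))
#   tf          = sqrt(termFreq)
#   queryWeight = idf * queryNorm
#   fieldWeight = tf * idf * fieldNorm
#   clause      = queryWeight * fieldWeight

def idf(doc_freq: int, max_docs: int) -> float:
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def clause_score(freq, doc_freq, max_docs, query_norm, field_norm):
    i = idf(doc_freq, max_docs)
    return (i * query_norm) * (math.sqrt(freq) * i * field_norm)

MAX_DOCS = 44218
QUERY_NORM = 0.03823278

# Entry 1 (doc 4265): fieldNorm = 0.0546875, both terms with termFreq = 2.0
data_w = clause_score(2.0, 5088, MAX_DOCS, QUERY_NORM, 0.0546875)   # ~0.02956491
becker_w = clause_score(2.0, 144, MAX_DOCS, QUERY_NORM, 0.0546875)  # ~0.13353567

# coord(2/4) = 0.5: only 2 of the 4 query clauses matched this document
total = 0.5 * (data_w + becker_w)                                   # ~0.081550285
```

The same recipe reproduces every tree on this page; only freq, docFreq, and fieldNorm change per entry.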
    
    Source
    Semantic web & linked data: Elemente zukünftiger Informationsinfrastrukturen ; 1. DGI-Konferenz ; 62. Jahrestagung der DGI ; Frankfurt am Main, 7. - 9. Oktober 2010 ; Proceedings / Deutsche Gesellschaft für Informationswissenschaft und Informationspraxis. Hrsg.: M. Ockenfeld
  2. Duretec, K.; Becker, C.: Format technology lifecycle analysis (2017) 0.08
    0.07917583 = product of:
      0.15835166 = sum of:
        0.04389251 = weight(_text_:data in 3836) [ClassicSimilarity], result of:
          0.04389251 = score(doc=3836,freq=6.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.3630661 = fieldWeight in 3836, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=3836)
        0.11445915 = weight(_text_:becker in 3836) [ClassicSimilarity], result of:
          0.11445915 = score(doc=3836,freq=2.0), product of:
            0.25693014 = queryWeight, product of:
              6.7201533 = idf(docFreq=144, maxDocs=44218)
              0.03823278 = queryNorm
            0.44548744 = fieldWeight in 3836, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7201533 = idf(docFreq=144, maxDocs=44218)
              0.046875 = fieldNorm(doc=3836)
      0.5 = coord(2/4)
    
    Abstract
    The lifecycles of format technology have been a defining concern for digital stewardship research and practice. However, little evidence exists to provide robust methods for assessing the state of any given format technology and describing its evolution over time. This article introduces relevant models from diffusion theory and market research and presents a replicable analysis method to compute models of technology evolution. Data cleansing and the combination of multiple data sources enable the application of nonlinear regression to estimate the parameters of the Bass diffusion model on format technology market lifecycles. Through its application to a longitudinal data set from the UK Web Archive, we demonstrate that the method produces reliable results and show that the Bass model can be used to describe format lifecycles. By analyzing adoption patterns across market segments, new insights are inferred about how the diffusion of formats and products such as applications occurs over time. The analysis provides a stepping stone to a more robust and evidence-based approach to model technology evolution.
  3. Naaman, M.; Becker, H.; Gravano, L.: Hip and trendy : characterizing emerging trends on Twitter (2011) 0.06
    0.058250207 = product of:
      0.116500415 = sum of:
        0.021117793 = weight(_text_:data in 4448) [ClassicSimilarity], result of:
          0.021117793 = score(doc=4448,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.17468026 = fieldWeight in 4448, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4448)
        0.09538262 = weight(_text_:becker in 4448) [ClassicSimilarity], result of:
          0.09538262 = score(doc=4448,freq=2.0), product of:
            0.25693014 = queryWeight, product of:
              6.7201533 = idf(docFreq=144, maxDocs=44218)
              0.03823278 = queryNorm
            0.3712395 = fieldWeight in 4448, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7201533 = idf(docFreq=144, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4448)
      0.5 = coord(2/4)
    
    Abstract
    Twitter, Facebook, and other related systems that we call social awareness streams are rapidly changing the information and communication dynamics of our society. These systems, where hundreds of millions of users share short messages in real time, expose the aggregate interests and attention of global and local communities. In particular, emerging temporal trends in these systems, especially those related to a single geographic area, are a significant and revealing source of information for, and about, a local community. This study makes two essential contributions for interpreting emerging temporal trends in these information systems. First, based on a large dataset of Twitter messages from one geographic area, we develop a taxonomy of the trends present in the data. Second, we identify important dimensions according to which trends can be categorized, as well as the key distinguishing features of trends that can be derived from their associated messages. We quantitatively examine the computed features for different categories of trends, and establish that significant differences can be detected across categories. Our study advances the understanding of trends on Twitter and other social awareness streams, which will enable powerful applications and activities, including user-driven real-time information services for local communities.
  4. Semantic web & linked data : Elemente zukünftiger Informationsinfrastrukturen ; 1. DGI-Konferenz ; 62. Jahrestagung der DGI ; Frankfurt am Main, 7. - 9. Oktober 2010 ; Proceedings / Deutsche Gesellschaft für Informationswissenschaft und Informationspraxis (2010) 0.05
    0.053956136 = product of:
      0.10791227 = sum of:
        0.0506827 = weight(_text_:data in 1516) [ClassicSimilarity], result of:
          0.0506827 = score(doc=1516,freq=32.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.4192326 = fieldWeight in 1516, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1516)
        0.057229575 = weight(_text_:becker in 1516) [ClassicSimilarity], result of:
          0.057229575 = score(doc=1516,freq=2.0), product of:
            0.25693014 = queryWeight, product of:
              6.7201533 = idf(docFreq=144, maxDocs=44218)
              0.03823278 = queryNorm
            0.22274372 = fieldWeight in 1516, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7201533 = idf(docFreq=144, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1516)
      0.5 = coord(2/4)
    
    Abstract
    In many areas, information science and computer science are converging ever more closely in their projects and goals. This is particularly evident against the background of the Semantic Web and Linked Data. Analysing texts and capturing the essence of a publication in the form of its reliable statements and underlying data, as well as storing and preparing them for use, are among the professional activities of the information specialists organised in the DGI. Given the appropriate intellectual groundwork, the Semantic Web promises to make these tasks easier in the future. Through many examples, the proceedings volume makes vividly clear how the foundations and tools developed over decades in information science and practice can now be deployed to build the information infrastructures of the future. It brings together the written versions of the contributions accepted by the programme committee for the 1st DGI conference, including knowledge-based applications in business, building ontologies by means of thesauri, multilingualism, and Open Data.
    Content
    Contains the contributions: ONTOLOGIEN UND WISSENSREPRÄSENTATIONEN: DIE VERLINKUNG ZWISCHEN INFORMATIONSSUCHENDEN UND INFORMATIONSRESSOURCEN Die Verwendung von SKOS-Daten zur semantischen Suchfragenerweiterung im Kontext des individualisierbaren Informationsportals RODIN / Fabio Ricci und Rene Schneider - Aufbau einer Testumgebung zur Ermittlung signifikanter Parameter bei der Ontologieabfrage / Sonja Öttl, Daniel Streiff, Niklaus Stettler und Martin Studer - Anforderungen an die Wissensrepräsentation im Social Semantic Web / Katrin Weller SEMANTIC WEB & LINKED DATA: WISSENSBASIERTE ANWENDUNGEN IN DER WIRTSCHAFT Semantic Web & Linked Data für professionelle Informationsangebote. Hoffnungsträger oder "alter Hut" - Eine Praxisbetrachtung für die Wirtschaftsinformationen / Ruth Göbel - Semantische wissensbasierte Suche in den Life Sciences am Beispiel von GoPubMed / Michael R. Alvers Produktion und Distribution für multimedialen Content in Form von Linked Data am Beispiel von PAUX / Michael Dreusicke DAS RÜCKRAT DES WEB DER DATEN: ONTOLOGIEN IN BIBLIOTHEKEN Linked Data aus und für Bibliotheken: Rückgratstärkung im Semantic Web / Reinhard Altenhöner, Jan Hannemann und Jürgen Kett - MODS2FRBRoo: Ein Tool zur Anbindung von bibliografischen Daten an eine Ontologie für Begriffe und Informationen im Bereich des kulturellen Erbes / Hans-Georg Becker - Suchmöglichkeiten für Vokabulare im Semantic Web / Friederike Borchert
    LINKED DATA IM GEOINFORMATIONSBEREICH - CHANCEN ODER GEFAHR? Geodaten - von der Verantwortung des Dealers / Karsten Neumann - Computergestützte Freizeitplanung basierend auf Points Of Interest / Peter Bäcker und Ugur Macit VON LINKED DATA ZU VERLINKTEN DIALOGEN Die globalisierte Semantic Web Informationswissenschaftlerin / Dierk Eichel - Kommunikation und Kontext. Überlegungen zur Entwicklung virtueller Diskursräume für die Wissenschaft / Ben Kaden und Maxi Kindling - Konzeptstudie: Die informationswissenschaftliche Zeitschrift der Zukunft / Lambert Heller und Heinz Pampel SEMANTIC WEB & LINKED DATA IM BILDUNGSWESEN Einsatz von Semantic Web-Technologien am Informationszentrum Bildung / Carola Carstens und Marc Rittberger - Bedarfsgerecht, kontextbezogen, qualitätsgesichert: Von der Information zum Wertschöpfungsfaktor Wissen am Beispiel einer Wissenslandkarte als dynamisches System zur Repräsentation des Wissens in der Berufsbildungsforschung / Sandra Dücker und Markus Linten - Virtuelle Forschungsumgebungen und Forschungsdaten für Lehre und Forschung: Informationsinfrastrukturen für die (Natur-)Wissenschaften / Matthias Schulze
    OPEN DATA - OPENS PROBLEMS? Challenges and Opportunities in Social Science Research Data Management / Stefan Kramer - Aktivitäten von GESIS im Kontext von Open Data und Zugang zu sozialwissenschaftlichen Forschungsergebnissen / Anja Wilde, Agnieszka Wenninger, Oliver Hopt, Philipp Schaer und Benjamin Zapilko NUTZER UND NUTZUNG IM ZEITALTER VON SEMANTIC WEB & LINKED DATA Die Erfassung, Nutzung und Weiterverwendung von statistischen Informationen - Erfahrungsbericht / Doris Stärk - Einsatz semantischer Technologien zur Entwicklung eines Lerntrajektoriengenerators in frei zugänglichen, nicht personalisierenden Lernplattformen / Richard Huber, Adrian Paschke, Georges Awad und Kirsten Hantelmann OPEN DATA: KONZEPTE - NUTZUNG - ZUKUNFT Zur Konzeption und Implementierung einer Infrastruktur für freie bibliographische Daten / Adrian Pohl und Felix Ostrowski - Lösung zum multilingualen Wissensmanagement semantischer Informationen / Lars Ludwig - Linked Open Projects: Nachnutzung von Projektergebnissen als Linked Data / Kai Eckert AUSBLICK INFORMATIONSKOMPETENZ GMMIK ['gi-mik] - Ein Modell der Informationskompetenz / Aleksander Knauerhase WORKSHOP Wissensdiagnostik als Instrument für Lernempfehlungen am Beispiel der Facharztprüfung / Werner Povoden, Sabine Povoden und Roland Streule
  5. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.05
    0.05147969 = product of:
      0.10295938 = sum of:
        0.060723793 = product of:
          0.30361897 = sum of:
            0.30361897 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.30361897 = score(doc=1826,freq=2.0), product of:
                0.32413796 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03823278 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.2 = coord(1/5)
        0.042235587 = weight(_text_:data in 1826) [ClassicSimilarity], result of:
          0.042235587 = score(doc=1826,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.34936053 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.5 = coord(2/4)
    
    Content
    Presentation given at: European Conference on Data Analysis (ECDA 2014), Bremen, Germany, July 2-4, 2014, LIS workshop.
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  6. Shaw, R.; Golden, P.; Buckland, M.: Using linked library data in working research notes (2015) 0.05
    0.051378123 = product of:
      0.10275625 = sum of:
        0.071676165 = weight(_text_:data in 2555) [ClassicSimilarity], result of:
          0.071676165 = score(doc=2555,freq=4.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.5928845 = fieldWeight in 2555, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.09375 = fieldNorm(doc=2555)
        0.031080082 = product of:
          0.062160164 = sum of:
            0.062160164 = weight(_text_:22 in 2555) [ClassicSimilarity], result of:
              0.062160164 = score(doc=2555,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.46428138 = fieldWeight in 2555, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2555)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
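The idf values recur unchanged across the trees because they depend only on the term, not on the matched document. A quick self-check that all four distinct values printed above follow the same formula (a sketch, assuming Lucene's classic idf definition):

```python
import math

def idf(doc_freq: int, max_docs: int = 44218) -> float:
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

# docFreq -> idf as printed in the explain() trees
expected = {
    5088: 3.1620505,  # _text_:data
    3622: 3.5018296,  # _text_:22
    144:  6.7201533,  # _text_:becker
    24:   8.478011,   # _text_:3a
}
for doc_freq, printed in expected.items():
    assert abs(idf(doc_freq) - printed) < 1e-5
```

Rare terms dominate: "becker" (docFreq 144) carries more than twice the idf of "data" (docFreq 5088), which is why it drives most of the ranking on this page.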
    
    Date
    15. 1.2016 19:22:28
    Source
    Linked data and user interaction: the road ahead. Eds.: Cervone, H.F. u. L.G. Svensson
  7. He, L.; Nahar, V.: Reuse of scientific data in academic publications : an investigation of Dryad Digital Repository (2016) 0.05
    0.047838215 = product of:
      0.09567643 = sum of:
        0.08013639 = weight(_text_:data in 3072) [ClassicSimilarity], result of:
          0.08013639 = score(doc=3072,freq=20.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.662865 = fieldWeight in 3072, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=3072)
        0.015540041 = product of:
          0.031080082 = sum of:
            0.031080082 = weight(_text_:22 in 3072) [ClassicSimilarity], result of:
              0.031080082 = score(doc=3072,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.23214069 = fieldWeight in 3072, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3072)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose - In recent years, a large number of data repositories have been built and used. However, the extent to which scientific data are re-used in academic publications is still unknown. The purpose of this paper is to explore the functions of re-used scientific data in scholarly publications across different fields. Design/methodology/approach - To address these questions, the authors identified 827 publications citing resources in the Dryad Digital Repository indexed by Scopus from 2010 to 2015. Findings - The results show that: the number of citations to scientific data increases sharply over the years, but mainly from data-intensive disciplines such as agricultural science, biology, environmental science, and medicine; the majority of citations are from the originating articles; and researchers tend to reuse data produced by their own research groups. Research limitations/implications - Dryad data may be re-used without being formally cited. Originality/value - The conservatism in data sharing suggests that more should be done to encourage researchers to re-use others' data.
    Date
    20. 1.2015 18:30:22
  8. Cronin, B.: Thinking about data (2013) 0.05
    0.04769496 = product of:
      0.09538992 = sum of:
        0.05912982 = weight(_text_:data in 4347) [ClassicSimilarity], result of:
          0.05912982 = score(doc=4347,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.48910472 = fieldWeight in 4347, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.109375 = fieldNorm(doc=4347)
        0.0362601 = product of:
          0.0725202 = sum of:
            0.0725202 = weight(_text_:22 in 4347) [ClassicSimilarity], result of:
              0.0725202 = score(doc=4347,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.5416616 = fieldWeight in 4347, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4347)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    22. 3.2013 16:18:36
  9. Becker, M.: Auf dem Weg zum Psychotherapie-Bot (2018) 0.05
    0.04769131 = product of:
      0.19076525 = sum of:
        0.19076525 = weight(_text_:becker in 4070) [ClassicSimilarity], result of:
          0.19076525 = score(doc=4070,freq=2.0), product of:
            0.25693014 = queryWeight, product of:
              6.7201533 = idf(docFreq=144, maxDocs=44218)
              0.03823278 = queryNorm
            0.742479 = fieldWeight in 4070, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7201533 = idf(docFreq=144, maxDocs=44218)
              0.078125 = fieldNorm(doc=4070)
      0.25 = coord(1/4)
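Entries 9 and 10 match only the becker clause, so the coordination factor drops from coord(2/4) = 0.5 to coord(1/4) = 0.25, penalizing documents that cover fewer of the query's clauses. Reconstructing entry 9's score from the printed factors (plain Python, variable names ours):

```python
import math

idf_becker = 6.7201533  # idf(docFreq=144, maxDocs=44218)
query_norm = 0.03823278
field_norm = 0.078125   # fieldNorm(doc=4070)

query_weight = idf_becker * query_norm                   # ~0.25693014
field_weight = math.sqrt(2.0) * idf_becker * field_norm  # ~0.742479 (tf = sqrt(2))
clause = query_weight * field_weight                     # ~0.19076525

score = (1 / 4) * clause  # coord(1/4): 1 of 4 clauses matched -> ~0.04769131
```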
    
  10. Schnetker, M.F.J.; Becker, M.: Transhumanismus : Von der Technikverehrung zur Mythologie (2019) 0.05
    0.04769131 = product of:
      0.19076525 = sum of:
        0.19076525 = weight(_text_:becker in 5308) [ClassicSimilarity], result of:
          0.19076525 = score(doc=5308,freq=2.0), product of:
            0.25693014 = queryWeight, product of:
              6.7201533 = idf(docFreq=144, maxDocs=44218)
              0.03823278 = queryNorm
            0.742479 = fieldWeight in 5308, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7201533 = idf(docFreq=144, maxDocs=44218)
              0.078125 = fieldNorm(doc=5308)
      0.25 = coord(1/4)
    
  11. Salaba, A.; Zeng, M.L.: Extending the "Explore" user task beyond subject authority data into the linked data sphere (2014) 0.05
    0.045274492 = product of:
      0.090548985 = sum of:
        0.072418936 = weight(_text_:data in 1465) [ClassicSimilarity], result of:
          0.072418936 = score(doc=1465,freq=12.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.59902847 = fieldWeight in 1465, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1465)
        0.01813005 = product of:
          0.0362601 = sum of:
            0.0362601 = weight(_text_:22 in 1465) [ClassicSimilarity], result of:
              0.0362601 = score(doc=1465,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.2708308 = fieldWeight in 1465, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1465)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    "Explore" is a user task introduced in the Functional Requirements for Subject Authority Data (FRSAD) final report. Through various case scenarios, the authors discuss how structured data, presented based on Linked Data principles and using knowledge organisation systems (KOS) as the backbone, extend the explore task within and beyond subject authority data.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  12. Vaughan, L.; Chen, Y.: Data mining from web search queries : a comparison of Google trends and Baidu index (2015) 0.04
    0.043052107 = product of:
      0.086104214 = sum of:
        0.07315418 = weight(_text_:data in 1605) [ClassicSimilarity], result of:
          0.07315418 = score(doc=1605,freq=24.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.60511017 = fieldWeight in 1605, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1605)
        0.012950035 = product of:
          0.02590007 = sum of:
            0.02590007 = weight(_text_:22 in 1605) [ClassicSimilarity], result of:
              0.02590007 = score(doc=1605,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.19345059 = fieldWeight in 1605, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1605)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Numerous studies have explored the possibility of uncovering information from web search queries but few have examined the factors that affect web query data sources. We conducted a study that investigated this issue by comparing Google Trends and Baidu Index. Data from these two services are based on queries entered by users into Google and Baidu, two of the largest search engines in the world. We first compared the features and functions of the two services based on documents and extensive testing. We then carried out an empirical study that collected query volume data from the two sources. We found that data from both sources could be used to predict the quality of Chinese universities and companies. Despite the differences between the two services in terms of technology, such as differing methods of language processing, the search volume data from the two were highly correlated and combining the two data sources did not improve the predictive power of the data. However, there was a major difference between the two in terms of data availability. Baidu Index was able to provide more search volume data than Google Trends did. Our analysis showed that the disadvantage of Google Trends in this regard was due to Google's smaller user base in China. The implication of this finding goes beyond China. Google's user bases in many countries are smaller than that in China, so the search volume data related to those countries could result in the same issue as that related to China.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.1, S.13-22
    Theme
    Data Mining
  13. Fonseca, F.; Marcinkowski, M.; Davis, C.: Cyber-human systems of thought and understanding (2019) 0.04
    0.043052107 = product of:
      0.086104214 = sum of:
        0.07315418 = weight(_text_:data in 5011) [ClassicSimilarity], result of:
          0.07315418 = score(doc=5011,freq=24.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.60511017 = fieldWeight in 5011, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5011)
        0.012950035 = product of:
          0.02590007 = sum of:
            0.02590007 = weight(_text_:22 in 5011) [ClassicSimilarity], result of:
              0.02590007 = score(doc=5011,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.19345059 = fieldWeight in 5011, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5011)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The present challenge faced by scientists working with Big Data comes in the overwhelming volume and level of detail provided by current data sets. Exceeding traditional empirical approaches, Big Data opens a new perspective on scientific work in which data comes to play a role in the development of the scientific problematic to be developed. Addressing this reconfiguration of our relationship with data through readings of Wittgenstein, Macherey, and Popper, we propose a picture of science that encourages scientists to engage with the data in a direct way, using the data itself as an instrument for scientific investigation. Using GIS as a theme, we develop the concept of cyber-human systems of thought and understanding to bridge the divide between representative (theoretical) thinking and (non-theoretical) data-driven science. At the foundation of these systems, we invoke the concept of the "semantic pixel" to establish a logical and virtual space linking data and the work of scientists. It is with this discussion of the relationship between analysts in their pursuit of knowledge and the rise of Big Data that this present discussion of the philosophical foundations of Big Data addresses the central questions raised by social informatics research.
    Date
    7. 3.2019 16:32:22
    Theme
    Data Mining
  14. Badia, A.: Data, information, knowledge : an information science analysis (2014) 0.04
    0.0421196 = product of:
      0.0842392 = sum of:
        0.06610915 = weight(_text_:data in 1296) [ClassicSimilarity], result of:
          0.06610915 = score(doc=1296,freq=10.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.5468357 = fieldWeight in 1296, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1296)
        0.01813005 = product of:
          0.0362601 = sum of:
            0.0362601 = weight(_text_:22 in 1296) [ClassicSimilarity], result of:
              0.0362601 = score(doc=1296,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.2708308 = fieldWeight in 1296, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1296)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    I analyze the text of an article that appeared in this journal in 2007 that published the results of a questionnaire in which a number of experts were asked to define the concepts of data, information, and knowledge. I apply standard information retrieval techniques to build a list of the most frequent terms in each set of definitions. I then apply information extraction techniques to analyze how the top terms are used in the definitions. As a result, I draw data-driven conclusions about the aggregate opinion of the experts. I contrast this with the original analysis of the data to provide readers with an alternative viewpoint on what the data tell us.
    Date
    16. 6.2014 19:22:57
  15. Parka, A.L.; Panchyshyn, R.S.: ¬The path to an RDA hybridized catalog : lessons from the Kent State University Libraries' RDA enrichment project (2016) 0.04
    0.0421196 = product of:
      0.0842392 = sum of:
        0.06610915 = weight(_text_:data in 2632) [ClassicSimilarity], result of:
          0.06610915 = score(doc=2632,freq=10.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.5468357 = fieldWeight in 2632, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2632)
        0.01813005 = product of:
          0.0362601 = sum of:
            0.0362601 = weight(_text_:22 in 2632) [ClassicSimilarity], result of:
              0.0362601 = score(doc=2632,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.2708308 = fieldWeight in 2632, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2632)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This article describes in detail the library implementation of a Resource Description and Access (RDA) Enrichment project. The library "hybridized," or enriched, legacy data from Anglo-American Cataloguing Rules bibliographic records by the addition of specific RDA elements. The project also cleaned up various other elements in the bibliographic data that were not directly RDA-related. There were over 28 million changes and edits made to these records, changes that would never have been made otherwise because the library lacked the resources to do them independently. The enrichment project made the bibliographic data consistent, and helped prepare the data for its eventual transition to a linked data environment.
    Date
    21. 1.2016 19:08:22
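The relevance values in these entries are Lucene "explain" trees for the classic TF-IDF similarity: each term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = √tf × idf × fieldNorm, and coord factors scale for matched-clause coverage. As a sanity check, the figures for entry 15 (doc=2632) can be reproduced directly from the constants shown in the tree above:

```python
import math

def classic_term_score(idf, query_norm, freq, field_norm):
    """One weight(...) node of a Lucene ClassicSimilarity explain tree:
    score = queryWeight * fieldWeight, with
    queryWeight = idf * queryNorm and fieldWeight = sqrt(freq) * idf * fieldNorm."""
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

QUERY_NORM = 0.03823278  # copied from the explain tree above

# Entry 15 (doc=2632): term "data" (freq=10) and term "22" (freq=2).
data_score = classic_term_score(3.1620505, QUERY_NORM, 10.0, 0.0546875)
term22_score = classic_term_score(3.5018296, QUERY_NORM, 2.0, 0.0546875) * 0.5  # coord(1/2)
total = (data_score + term22_score) * 0.5  # coord(2/4)

print(round(total, 7))  # agrees with the 0.0421196 shown above
```

The same formula reproduces every other score tree on this page; only idf, freq, fieldNorm, and the coord factors change per entry.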
  16. Borgman, C.L.: ¬The conundrum of sharing research data (2012) 0.04
    0.041494917 = product of:
      0.082989834 = sum of:
        0.0700398 = weight(_text_:data in 248) [ClassicSimilarity], result of:
          0.0700398 = score(doc=248,freq=22.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.5793489 = fieldWeight in 248, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=248)
        0.012950035 = product of:
          0.02590007 = sum of:
            0.02590007 = weight(_text_:22 in 248) [ClassicSimilarity], result of:
              0.02590007 = score(doc=248,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.19345059 = fieldWeight in 248, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=248)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Researchers are producing an unprecedented deluge of data by using new methods and instrumentation. Others may wish to mine these data for new discoveries and innovations. However, research data are not readily available as sharing is common in only a few fields such as astronomy and genomics. Data sharing practices in other fields vary widely. Moreover, research data take many forms, are handled in many ways, using many approaches, and often are difficult to interpret once removed from their initial context. Data sharing is thus a conundrum. Four rationales for sharing data are examined, drawing examples from the sciences, social sciences, and humanities: (1) to reproduce or to verify research, (2) to make results of publicly funded research available to the public, (3) to enable others to ask new questions of extant data, and (4) to advance the state of research and innovation. These rationales differ by the arguments for sharing, by beneficiaries, and by the motivations and incentives of the many stakeholders involved. The challenges are to understand which data might be shared, by whom, with whom, under what conditions, why, and to what effects. Answers will inform data policy and practice.
    Date
    11. 6.2012 15:22:29
  17. Eschenfelder, K.R.; Johnson, A.: Managing the data commons : controlled sharing of scholarly data (2014) 0.04
    0.04129348 = product of:
      0.08258696 = sum of:
        0.06704692 = weight(_text_:data in 1341) [ClassicSimilarity], result of:
          0.06704692 = score(doc=1341,freq=14.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.55459267 = fieldWeight in 1341, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=1341)
        0.015540041 = product of:
          0.031080082 = sum of:
            0.031080082 = weight(_text_:22 in 1341) [ClassicSimilarity], result of:
              0.031080082 = score(doc=1341,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.23214069 = fieldWeight in 1341, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1341)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This paper describes the range and variation in access and use control policies and tools used by 24 web-based data repositories across a variety of fields. It also describes the rationale provided by repositories for their decisions to control data or provide means for depositors to do so. Using a purposive exploratory sample, we employed content analysis of repository website documentation, a web survey of repository managers, and selected follow-up interviews to generate data. Our results describe the range and variation in access and use control policies and tools employed, identifying both commonalities and distinctions across repositories. Using concepts from commons theory as a guiding theoretical framework, our analysis describes the following five dimensions of repository rules, or data commons boundaries: locus of decision making (depositor vs. repository), degree of variation in terms of use within the repository, the mission of the repository in relation to its scholarly field, what use means in relation to specific sorts of data, and types of exclusion.
    Date
    22. 8.2014 16:56:41
  18. Marx, W.; Bornmann, L.: On the problems of dealing with bibliometric data (2014) 0.04
    0.04088139 = product of:
      0.08176278 = sum of:
        0.0506827 = weight(_text_:data in 1239) [ClassicSimilarity], result of:
          0.0506827 = score(doc=1239,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.4192326 = fieldWeight in 1239, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.09375 = fieldNorm(doc=1239)
        0.031080082 = product of:
          0.062160164 = sum of:
            0.062160164 = weight(_text_:22 in 1239) [ClassicSimilarity], result of:
              0.062160164 = score(doc=1239,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.46428138 = fieldWeight in 1239, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1239)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    18. 3.2014 19:13:22
  19. Pohl, A.; Danowski, P.: Linked Open Data in der Bibliothekswelt : Überblick und Herausforderungen (2015) 0.04
    0.04088139 = product of:
      0.08176278 = sum of:
        0.0506827 = weight(_text_:data in 2057) [ClassicSimilarity], result of:
          0.0506827 = score(doc=2057,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.4192326 = fieldWeight in 2057, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.09375 = fieldNorm(doc=2057)
        0.031080082 = product of:
          0.062160164 = sum of:
            0.062160164 = weight(_text_:22 in 2057) [ClassicSimilarity], result of:
              0.062160164 = score(doc=2057,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.46428138 = fieldWeight in 2057, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2057)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    26. 8.2015 10:22:00
  20. De Luca, E.W.; Dahlberg, I.: Including knowledge domains from the ICC into the multilingual lexical linked data cloud (2014) 0.04
    0.040833745 = product of:
      0.08166749 = sum of:
        0.063353375 = weight(_text_:data in 1493) [ClassicSimilarity], result of:
          0.063353375 = score(doc=1493,freq=18.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.52404076 = fieldWeight in 1493, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1493)
        0.018314114 = product of:
          0.036628228 = sum of:
            0.036628228 = weight(_text_:22 in 1493) [ClassicSimilarity], result of:
              0.036628228 = score(doc=1493,freq=4.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.27358043 = fieldWeight in 1493, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1493)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Much of the information already available on the Web, or retrieved from local information systems and social networks, is structured in data silos that are not semantically related. Semantic technologies show that typed links which directly express their relations are an advantage for every application that can reuse the knowledge incorporated in the data. For this reason, data integration, through reengineering (e.g. Triplify) or querying (e.g. D2R), is an important task in order to make information available to everyone. Thus, in order to build a semantic map of the data, we need knowledge about the data items themselves and the relations between heterogeneous data items. In this paper, we present our work on providing Lexical Linked Data (LLD) through a meta-model that contains all the resources and makes it possible to retrieve and navigate them from different perspectives. We combine existing work on knowledge domains (based on the Information Coding Classification) with the Multilingual Lexical Linked Data Cloud (based on the RDF/OWL EuroWordNet and the related integrated lexical resources: MultiWordNet, EuroWordNet, the MEMODATA Lexicon, and the Hamburg Metaphor Database).
    Date
    22. 9.2014 19:01:18
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
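The "typed links" this abstract refers to are RDF-style subject-predicate-object triples. A minimal, library-free sketch (the resource names and predicates below are illustrative placeholders, not the authors' actual meta-model) shows how the link type itself lets an application navigate heterogeneous lexical resources:

```python
# Typed links modeled as (subject, predicate, object) triples, in the
# spirit of the RDF model the abstract refers to. All identifiers here
# are hypothetical examples, not the paper's actual data.
triples = {
    ("lld:bank_1", "rdf:type", "ewn:Sense"),
    ("lld:bank_1", "skos:broader", "icc:EconomicsDomain"),
    ("lld:bank_2", "rdf:type", "ewn:Sense"),
    ("lld:bank_2", "skos:broader", "icc:GeographyDomain"),
    ("lld:bank_1", "owl:sameAs", "mwn:banca_1"),
}

def objects(subject, predicate):
    """Follow a typed link: all objects reachable from subject via predicate."""
    return {o for s, p, o in triples if s == subject and p == predicate}

# Because each link carries a type, an application can ask targeted
# questions, e.g. for the knowledge domain of one word sense, or for
# its cross-lingual equivalent in another resource:
print(objects("lld:bank_1", "skos:broader"))  # {'icc:EconomicsDomain'}
print(objects("lld:bank_1", "owl:sameAs"))    # {'mwn:banca_1'}
```

This is what distinguishes semantically typed links from plain hyperlinks: the relation itself is data that downstream applications can query and reuse.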

Languages

  • e 1715
  • d 320
  • f 2
  • a 1
  • hu 1
  • i 1
  • pt 1

Types

  • a 1775
  • el 227
  • m 157
  • s 62
  • x 30
  • r 14
  • b 5
  • i 2
  • p 2
  • n 1
  • z 1