Search (91 results, page 1 of 5)

  • year_i:[2010 TO 2020}
  • type_ss:"el"
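These two filters are Lucene/Solr filter queries: year_i:[2010 TO 2020} is a range filter whose mixed brackets mean an inclusive lower bound (2010) and an exclusive upper bound (2020), and type_ss:"el" restricts the type field (the value "el" presumably marks electronic documents). As a rough sketch, such a filtered search could be issued against a Solr core like this; the host, core name, and base query are assumptions for illustration:

  import urllib.parse

  # Hypothetical Solr select URL; host and core name are assumptions.
  base = "http://localhost:8983/solr/literature/select"
  params = [
      ("q", "*:*"),                     # assumed base query
      ("fq", 'year_i:[2010 TO 2020}'),  # [ = inclusive, } = exclusive bound
      ("fq", 'type_ss:"el"'),           # document-type filter
      ("rows", "20"),                   # 20 hits per page (91 hits, 5 pages)
      ("start", "0"),                   # offset 0 = page 1
      ("wt", "json"),
  ]
  print(base + "?" + urllib.parse.urlencode(params))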
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.57
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
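    The two-decimal value after each title is the Lucene relevance score. For this first hit, the engine's explain output reports, for each matched query token ("3a", "2f"), idf = 8.478011 (docFreq 24 in 44218 documents), queryNorm = 0.037368443, fieldNorm = 0.078125 and term frequency 2, combined with coordination factors 1/3 and 4/7. A minimal sketch of how ClassicSimilarity combines these into the displayed 0.57:

      import math

      idf = 1 + math.log(44218 / (24 + 1))  # ≈ 8.478011, idf(docFreq=24, maxDocs=44218)
      query_norm = 0.037368443
      field_norm = 0.078125
      tf = math.sqrt(2.0)                   # tf(freq=2.0)

      query_weight = idf * query_norm           # ≈ 0.31681007
      field_weight = tf * idf * field_norm      # ≈ 0.93669677
      term_score = query_weight * field_weight  # ≈ 0.29675496 per matching term

      # one '3a' clause damped by coord(1/3), three '2f' clauses, then coord(4/7)
      total = (term_score / 3 + 3 * term_score) * (4 / 7)
      print(f"{total:.8f}")                 # ≈ 0.56524754, displayed as 0.57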
  2. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.28
    Footnote
    Cf. https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls.
  3. Reichmann, W.: Open Science zwischen sozialen Strukturen und Wissenskulturen : eine wissenschaftssoziologische Erweiterung (2017) 0.01
    Abstract
    This article argues for a differentiated interpretation of the open science idea: as a comprehensive structural phenomenon and as a cultural one. In public debate, open science is often reduced to the structural opening of the publication market on the demand side. This overlooks the fact that science also consists of further structures, for example the social structure of scientific communities, in which mechanisms of closure and opening can be observed. Beyond that, open science should be interpreted as a cultural phenomenon. Drawing on the concept of "epistemic cultures" ("Wissenskulturen"), the article shows that in scientific practice open science presents itself as a processual and heterogeneous phenomenon, and that openness carries different meanings for different groups within the scientific community.
  4. Laaff, M.: Googles genialer Urahn (2011) 0.01
    Content
    The dream of a dynamic, constantly growing web of knowledge: Otlet was already considering how his networked knowledge catalogue could absorb annotations that correct errors or record dissent. Charles van den Heuvel of the Royal Netherlands Academy of Arts and Sciences warns against this analogy, however. On his interpretation, Otlet envisaged a system in which knowledge is ordered hierarchically: only a small group of scholars would work on classifying knowledge, and edits and annotations would not merge with the information, as they do on Wikipedia, but merely supplement it. The network Otlet imagined went far beyond the World Wide Web and its hypertext structure. Otlet wanted not merely to connect pieces of information with one another; the links themselves were to be charged with additional meaning. Many experts agree that this idea of Otlet's shows many parallels to the concept of the "semantic web", whose goal is to make the meaning of information usable by machines, so that information can be interpreted by them and processed automatically. Projects attempting to realize the semantic web could profit from a look at Otlet's concepts, says van den Heuvel, in particular from his reflections on hierarchy and centralization. At the Mundaneum in Mons, work is currently under way to digitize Otlet's papers and put them online. That is likely to take quite a while yet, archivist Gillen cautions. But once it is done, Otlet's vision will finally be fulfilled: his collection of knowledge will be accessible to the world. Paperless, retrievable by anyone.
    Date
    24.10.2008 14:19:22
  5. Assem, M. van; Rijgersberg, H.; Wigham, M.; Top, J.: Converting and annotating quantitative data tables (2010) 0.01
    Abstract
    Companies, governmental agencies and scientists produce a large amount of quantitative (research) data, consisting of measurements ranging from, e.g., the surface temperature of an ocean to the viscosity of a sample of mayonnaise. Such measurements are stored in tables, e.g. in spreadsheet files and research reports. To integrate and reuse such data, it is necessary to have a semantic description of the data. However, the notation used is often ambiguous, making automatic interpretation and conversion to RDF or another suitable format difficult. For example, the table header cell "f(Hz)" refers to frequency measured in Hertz, but the symbol "f" can also refer to the unit farad or to the quantities force or luminous flux. Current annotation tools for this task either work on less ambiguous data or perform a more limited task. We introduce new disambiguation strategies based on an ontology, which improve performance on "sloppy" datasets not yet targeted by existing systems.
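    The kind of ontology-based disambiguation described here can be pictured with a toy example: the unit in parentheses is used to pick among candidate readings of an ambiguous symbol. The mini-"ontology" below is invented for illustration and merely stands in for the real ontology the authors use:

      import re

      # Toy candidate readings; invented for illustration only.
      SYMBOL_CANDIDATES = {"f": ["frequency", "force", "luminous flux"]}
      UNITS_OF_QUANTITY = {"frequency": {"Hz"}, "force": {"N"}, "luminous flux": {"lm"}}

      def interpret_header(header: str):
          """Resolve a header like 'f(Hz)' to a (quantity, unit) pair when the
          unit is compatible with exactly one reading of the symbol."""
          m = re.fullmatch(r"(\w+)\((\w+)\)", header)
          if not m:
              return None
          symbol, unit = m.groups()
          matches = [q for q in SYMBOL_CANDIDATES.get(symbol, [])
                     if unit in UNITS_OF_QUANTITY[q]]
          return (matches[0], unit) if len(matches) == 1 else None

      print(interpret_header("f(Hz)"))   # ('frequency', 'Hz')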
  6. Xiaoyue M.; Cahier, J.-P.: Iconic categorization with knowledge-based "icon systems" can improve collaborative KM (2011) 0.01
    Abstract
    Icon systems could offer an efficient solution for the collective iconic categorization of knowledge by providing a graphical interpretation. Their pictorial character helps visualize the structure of a text, making it understandable across vocabulary barriers. In this paper we propose an iconic representation approach based on knowledge engineering. We assume that such systematic icons improve collective knowledge management. Meanwhile, text (structured under our knowledge management model, Hypertopic) helps reduce the diversity of graphical understanding among different users. This position paper also prepares to test our hypothesis in an "iconic social tagging" experiment, to be carried out in 2011 with UTT students. We describe the "socio-semantic web" information portal involved in this project and some of the icons already designed for this experiment in the sustainability field. We have reviewed existing theoretical work on icons from various origins, which can be used to lay the foundation of robust "icon systems".
  7. Donahue, J.; Hendricks, L.A.; Guadarrama, S.; Rohrbach, M.; Venugopalan, S.; Saenko, K.; Darrell, T.: Long-term recurrent convolutional networks for visual recognition and description (2014) 0.01
    Abstract
    Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or "temporally deep", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are "doubly deep" in that they can be compositional in spatial and temporal "layers". Such models may have advantages when target concepts are complex and/or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they can directly map variable-length inputs (e.g., video frames) to variable-length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and/or optimized.
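    The "doubly deep" structure described here (convolutional across space, recurrent across time) can be sketched as a per-frame CNN encoder feeding an LSTM. This is a schematic reading of the idea, not the authors' implementation; all layer sizes are invented:

      import torch
      import torch.nn as nn

      class TinyLRCN(nn.Module):
          """Schematic CNN-to-LSTM pipeline: spatial features per frame,
          temporal dynamics across the frame sequence."""
          def __init__(self, num_classes: int = 10):
              super().__init__()
              self.cnn = nn.Sequential(                 # per-frame visual encoder
                  nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1))              # -> (N, 16, 1, 1)
              self.rnn = nn.LSTM(16, 32, batch_first=True)  # temporal "layer"
              self.head = nn.Linear(32, num_classes)

          def forward(self, video):                     # video: (B, T, 3, H, W)
              b, t = video.shape[:2]
              feats = self.cnn(video.flatten(0, 1)).flatten(1)  # (B*T, 16)
              out, _ = self.rnn(feats.view(b, t, -1))           # (B, T, 32)
              return self.head(out[:, -1])              # classify from last step

      logits = TinyLRCN()(torch.randn(2, 8, 3, 32, 32))  # 2 clips, 8 frames each
      print(logits.shape)                                # torch.Size([2, 10])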
  8. Choi, I.: Visualizations of cross-cultural bibliographic classification : comparative studies of the Korean Decimal Classification and the Dewey Decimal Classification (2017) 0.01
    Abstract
    The changes in KO systems induced by sociocultural influences may include those in both classificatory principles and cultural features. The proposed study will examine the Korean Decimal Classification (KDC)'s adaptation of the Dewey Decimal Classification (DDC) by comparing the two systems. This case manifests the sociocultural influences on KOSs in a cross-cultural context. The study therefore aims at an in-depth investigation of sociocultural influences by situating a KOS in a cross-cultural environment and examining the dynamics between two classification systems designed to organize information resources in two distinct sociocultural contexts. As a preceding stage of the comparison, a descriptive analysis was conducted of the changes that result from the meeting of different sociocultural features. The analysis aims to identify variations between the two schemes by comparing their knowledge structures, in terms of the quantity of class numbers that represent concepts and their relationships in each of the individual main classes. The most effective analytic strategy for showing the patterns of the comparison was visualization of similarities and differences between the two systems. Increasing or decreasing tendencies in the classes across various editions were analyzed. Comparing the compositions of the main classes and the distributions of concepts in the KDC and DDC discloses the differences in their knowledge structures empirically. This phase of quantitative analysis and visualization techniques generates empirical evidence leading to interpretation.
  9. Lee, W.-C.: Conflicts of semantic warrants in cataloging practices (2017) 0.01
    Abstract
    This study presents preliminary themes surfaced from an ongoing ethnographic study. The research question is: how and where do cultures influence the cataloging practices of using U.S. standards to catalog Chinese materials? The author applies warrant as a lens for evaluating knowledge representation systems, and extends the application from examining classificatory decisions to cataloging decisions. Semantic warrant as a conceptual tool allows us to recognize and name the various rationales behind cataloging decisions, grants us explanatory power, and gives us the language to "visualize" and reflect on the conflicting priorities in cataloging practices. Through participatory observation, the author recorded the cataloging practices of two Chinese catalogers working on the same cataloging project. One of the catalogers is U.S.-trained; the other is a professor of Library and Information Science from China, who is also a subject expert and a cataloger of Chinese special collections. The study shows how the catalogers describe Chinese special collections using many U.S. cataloging and classification standards but from different approaches. The author presents particular cases derived from the fieldwork, with an emphasis on the many layers presented by cultures, principles, standards, and practices of different scope, each of which may represent conflicting warrants. From this, it is made clear that conflicts of warrants influence cataloging practice. We may view the conflicting warrants as an interpretation of the tension between different semantic warrants and the globalization and localization of cataloging standards.
  10. Gillitzer, B.: Yewno (2017) 0.01
    Abstract
    "The Bayerische Staatsbibliothek is testing the semantic 'discovery service' Yewno as an additional thematic search engine for digital full texts. The service can be reached at the following link: https://www.bsb-muenchen.de/recherche-und-service/suchen-und-finden/yewno/. In Yewno, identifying the topics a text deals with rests entirely on methods of artificial intelligence and machine learning. Topics are assigned not to a text as a whole, as in classical catalogue systems, but to the specific passage. Entering a search term or topic, called a 'concept' in Yewno, immediately produces a graphical display of a semantic network of relevant concepts and their substantive connections. One can thus navigate along thematic relationships down to the passages in the text, which are then displayed as so-called snippets. In the Bayerische Staatsbibliothek's test installation, Yewno currently searches 40 million English-language documents from publications of renowned academic publishers such as Cambridge University Press, Oxford University Press, Wiley, Sage and Springer, as well as documents available in open access. After the three-month test phase, user feedback will first be evaluated. Whether and when the step from the classical search engine to the semantic 'discovery service' will come, and what role applications such as Yewno will play in that context, cannot yet be foreseen. The Yewno software was developed by the start-up of the same name in cooperation with Stanford University, with which the Bayerische Staatsbibliothek also cooperates closely. [Inetbib posting of 22.02.2017]"
    Date
    22. 2.2017 10:16:49
  11. Metrics in research : for better or worse? (2016) 0.01
    Content
    Contents: Metrics in Research - For better or worse? / Jozica Dolenc, Philippe Hünenberger, Oliver Renn - A brief visual history of research metrics / Oliver Renn, Jozica Dolenc, Joachim Schnabl - Bibliometry: The wizard of O's / Philippe Hünenberger - The grip of bibliometrics - A student perspective / Matthias Tinzl - Honesty and transparency to taxpayers is the long-term fundament for stable university funding / Wendelin J. Stark - Beyond metrics: Managing the performance of your work / Charlie Rapple - Scientific profiling instead of bibliometrics: Key performance indicators of the future / Rafael Ball - More knowledge, less numbers / Carl Philipp Rosenau - Do we really need BIBLIO-metrics to evaluate individual researchers? / Rüdiger Mutz - Using research metrics responsibly and effectively as a researcher / Peter I. Darroch, Lisa H. Colledge - Metrics in research: More (valuable) questions than answers / Urs Hugentobler - Publication of research results: Use and abuse / Wilfred F. van Gunsteren - Wanted: Transparent algorithms, interpretation skills, common sense / Eva E. Wille - Impact factors, the h-index, and citation hype - Metrics in research from the point of view of a journal editor / Renato Zenobi - Rashomon or metrics in a publisher's world / Gabriella Karger - The impact factor and I: A love-hate relationship / Jean-Christophe Leroux - Personal experiences bringing altmetrics to the academic market / Ben McLeish - Fatally attracted by numbers? / Oliver Renn - On computable numbers / Gerd Folkers, Laura Folkers - ScienceMatters - Single observation science publishing and linking observations to create an internet of science / Lawrence Rajendran.
  12. Jörs, B.: ¬Die Informationswissenschaft ist tot, es lebe die Datenwissenschaft (2019) 0.01
    Abstract
    "Have 'data' and 'data science' made 'information' and information science obsolete? Has data science, with its AI-based instruments, taken over economic and technical dominion over 'data', and with it over 'information' and 'knowledge'? Data science, mostly at home in computer science and mathematics, has assumed the scientific leadership role in transforming 'data' into 'information' and 'knowledge'." "The shift from analogue to digital information processing has in essence made information science obsolete. Today the engagement with the category 'data', its causal connection to the generation of 'knowledge' (recognition of patterns and relationships, predictive capability, etc.), and its neural processing and storage stand at the centre of research." "If North's knowledge ladder also held for information science, it would recognize that engaging with 'data', and the interpretation of 'data' made possible by prior knowledge, first create the preconditions for understanding 'information' as 'contextualized data', in order to be able to structure, represent, generate and search 'information'."
  13. Geuter, J.: Nein, Ethik kann man nicht programmieren (2018) 0.01
    Content
    Fallacy 1: The application of ethics can be formulated in computer programs - Fallacy 2: Data produce truth, and if not, you simply need more data - Fallacy 3: In 20 years there will be an artificial intelligence that is as good as or better than a human one - Fallacy 4: Discrimination by algorithms is worse than discrimination by humans - Fallacy 5: Laws and contracts can be expressed in code in order to standardize their application - Fallacy 6: Digital freedom expresses itself in the complete autonomy of the individual.
  14. Eckert, K.: ¬The ICE-map visualization (2011) 0.01
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  15. Baierer, K.; Zumstein, P.: Verbesserung der OCR in digitalen Sammlungen von Bibliotheken (2016) 0.01
    Abstract
    Possibilities for improving automatic text recognition (OCR) in digital collections, in particular through computational-linguistics methods, are described, and existing post-OCR procedures are analysed. In contrast to these possibilities from research or from individual projects, the current application of OCR in library practice differs considerably and exploits the potential only in part.
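    One simple family of post-OCR procedures of the kind surveyed here replaces suspicious tokens with the closest entry in a lexicon; a minimal standard-library sketch, with word list and OCR errors invented for illustration:

      import difflib

      LEXICON = ["Bibliothek", "digital", "Sammlung", "Text"]  # invented word list

      def correct_token(token: str, cutoff: float = 0.8) -> str:
          """Return the closest lexicon entry if it is similar enough,
          otherwise keep the OCR output unchanged."""
          match = difflib.get_close_matches(token, LEXICON, n=1, cutoff=cutoff)
          return match[0] if match else token

      print(correct_token("Bibliothck"))  # 'Bibliothek' (c/e confusion fixed)
      print(correct_token("Qwrtz"))       # 'Qwrtz' (no plausible correction)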
  16. Winterhalter, C.: Licence to mine : ein Überblick über Rahmenbedingungen von Text and Data Mining und den aktuellen Stand der Diskussion (2016) 0.01
    Abstract
    The article gives an overview of the possibilities of applying text and data mining (TDM) and similar procedures on the basis of existing provisions in licence agreements for fee-based electronic resources, of the debate over additional licences for TDM using the example of Elsevier's TDM policy, and of the state of the discussion about introducing copyright exceptions for TDM for non-commercial scholarly purposes.
  17. Wolchover, N.: Wie ein Aufsehen erregender Beweis kaum Beachtung fand (2017) 0.01
    Date
    22. 4.2017 10:42:05
    22. 4.2017 10:48:38
  18. Hafner, R.; Schelling, B.: Automatisierung der Sacherschließung mit Semantic Web Technologie (2015) 0.01
    Date
    22. 6.2015 16:08:38
  19. Bandholtz, T.; Schulte-Coerne, T.; Glaser, R.; Fock, J.; Keller, T.: iQvoc - open source SKOS(XL) maintenance and publishing tool (2010) 0.00
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  20. Wiesenmüller, H.: Monografische Reihen : Abschaffung der hierarchischen Beschreibung? (2016) 0.00
    Abstract
    Under RDA it is no longer mandatory to describe numbered monographic series hierarchically. The HeBIS consortium recently abolished the hierarchical description of monographic series altogether: "Parts of monographic series are catalogued analytically, dispensing with a link to the record for the series," the consortium rules state (version 3, as of 10.12.2015, p. 2). As far as I know, three further consortia (BVB, KOBV and GBV) have "flexibilized" the practice, i.e. left it to their libraries whether or not to continue using hierarchical description for monographic series. I follow this development with great concern and, frankly, with some incomprehension. For in my assessment, hierarchical description is not only markedly superior to analytical description in its results; it is also far more efficient and economical.

Languages

  • d 61
  • e 25
  • a 1

Types

  • a 58
  • r 4
  • x 3
  • s 2
  • m 1
  • n 1
  • p 1
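The language and type counts above are facet counts over the result set. A sketch of how such counts might be requested from a Solr index follows; the language facet field name is an assumption modeled on the type_ss field used in the filters at the top of the page:

  import urllib.parse

  # Hypothetical Solr select URL; host, core and language field are assumptions.
  params = [
      ("q", "*:*"),
      ("facet", "true"),
      ("facet.field", "language_ss"),  # assumed name of the language facet field
      ("facet.field", "type_ss"),      # type field, as used in the filters above
      ("rows", "0"),                   # facet counts only, no documents
  ]
  print("http://localhost:8983/solr/literature/select?"
        + urllib.parse.urlencode(params))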