Search (5975 results, page 1 of 299)

  • Filter: year_i:[2010 TO 2020} (lower bound inclusive, upper bound exclusive, i.e. publication years 2010-2019)
  1. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.20
    0.2002719 = product of:
      0.7209788 = sum of:
        0.069587715 = product of:
          0.20876314 = sum of:
            0.20876314 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
              0.20876314 = score(doc=973,freq=2.0), product of:
                0.18572637 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.021906832 = queryNorm
                1.1240361 = fieldWeight in 973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.09375 = fieldNorm(doc=973)
          0.33333334 = coord(1/3)
        0.20876314 = weight(_text_:2f in 973) [ClassicSimilarity], result of:
          0.20876314 = score(doc=973,freq=2.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            1.1240361 = fieldWeight in 973, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.09375 = fieldNorm(doc=973)
        0.20876314 = weight(_text_:2f in 973) [ClassicSimilarity], result of:
          0.20876314 = score(doc=973,freq=2.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            1.1240361 = fieldWeight in 973, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.09375 = fieldNorm(doc=973)
        0.20876314 = weight(_text_:2f in 973) [ClassicSimilarity], result of:
          0.20876314 = score(doc=973,freq=2.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            1.1240361 = fieldWeight in 973, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.09375 = fieldNorm(doc=973)
        0.025101645 = weight(_text_:der in 973) [ClassicSimilarity], result of:
          0.025101645 = score(doc=973,freq=6.0), product of:
            0.048934754 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021906832 = queryNorm
            0.5129615 = fieldWeight in 973, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.09375 = fieldNorm(doc=973)
      0.2777778 = coord(5/18)
    
    Content
    Cf.: http://creativechoice.org/doc/HansJonas.pdf.
    Footnote
    Inaugural dissertation for the degree of Doctor, submitted to the Faculty of Philosophy I of the Humboldt-Universität zu Berlin.
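    The score breakdowns above follow Lucene's ClassicSimilarity arithmetic: tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq + 1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and the displayed score is coord times the sum of queryWeight * fieldWeight over the matching terms (the tokens "3a" and "2f" are apparently the percent-encoded ':' and '/' of a URL pasted into the query). A minimal Python sketch, assuming exactly these textbook formulas, reproduces the 0.2002719 of this entry:

      # Sketch: re-deriving the ClassicSimilarity score of result 1 (doc 973)
      # from the constants printed in the explain output above.
      import math

      def idf(doc_freq, max_docs):
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
          i = idf(doc_freq, max_docs)        # 8.478011 for docFreq=24
          query_weight = i * query_norm      # 8.478011 * 0.021906832 = 0.18572637
          field_weight = math.sqrt(freq) * i * field_norm
          return query_weight * field_weight

      qn, fn, n_docs = 0.021906832, 0.09375, 44218
      w_3a = term_score(2.0, 24, n_docs, qn, fn) * (1 / 3)   # coord(1/3) -> 0.069587715
      w_2f = term_score(2.0, 24, n_docs, qn, fn)             # 0.20876314, counted three times
      w_der = term_score(6.0, 12875, n_docs, qn, fn)         # 0.025101645
      print((w_3a + 3 * w_2f + w_der) * (5 / 18))            # coord(5/18) -> ~0.2002719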
  2. Suchenwirth, L.: Sacherschliessung in Zeiten von Corona : neue Herausforderungen und Chancen (2019) 0.17
    0.1704474 = product of:
      0.51134217 = sum of:
        0.034793857 = product of:
          0.10438157 = sum of:
            0.10438157 = weight(_text_:3a in 484) [ClassicSimilarity], result of:
              0.10438157 = score(doc=484,freq=2.0), product of:
                0.18572637 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.021906832 = queryNorm
                0.56201804 = fieldWeight in 484, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=484)
          0.33333334 = coord(1/3)
        0.14761783 = weight(_text_:2f in 484) [ClassicSimilarity], result of:
          0.14761783 = score(doc=484,freq=4.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.7948135 = fieldWeight in 484, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=484)
        0.14761783 = weight(_text_:2f in 484) [ClassicSimilarity], result of:
          0.14761783 = score(doc=484,freq=4.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.7948135 = fieldWeight in 484, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=484)
        0.14761783 = weight(_text_:2f in 484) [ClassicSimilarity], result of:
          0.14761783 = score(doc=484,freq=4.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.7948135 = fieldWeight in 484, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=484)
        0.0065819086 = weight(_text_:in in 484) [ClassicSimilarity], result of:
          0.0065819086 = score(doc=484,freq=12.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.22087781 = fieldWeight in 484, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=484)
        0.027112871 = weight(_text_:der in 484) [ClassicSimilarity], result of:
          0.027112871 = score(doc=484,freq=28.0), product of:
            0.048934754 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021906832 = queryNorm
            0.55406165 = fieldWeight in 484, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.046875 = fieldNorm(doc=484)
      0.33333334 = coord(6/18)
    
    Abstract
    One of a subject specialist's essential tasks is the subject indexing of newly acquired literature, as well as reviewing and selecting newly published works in the respective subject areas with regard to their relevance for the library's own collections according to the collection profile of the TU Wien Bibliothek. In the course of the corona crisis the usual way of working in subject indexing was impaired, and new challenges, but also new opportunities, arose. Besides the missing hands-on inspection of the literature, the screening and acquisition of new publications also suffered from the physical absence from the workplace and from the corresponding technical infrastructure, and personal exchange with colleagues became considerably more difficult. Alongside the difficulties, however, the new options for action on the professional and individual level were also seen positively. The article presents our individual experiences at the TU Wien Bibliothek as well as outlooks on the future.
    Footnote
    https://journals.univie.ac.at/index.php/voebm/article/download/5332/5271/
    Source
    Mitteilungen der Vereinigung Österreichischer Bibliothekarinnen und Bibliothekare. 73(2020) H.3/4, S.496-503
  3. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.16
    0.16284199 = product of:
      0.5862311 = sum of:
        0.05798977 = product of:
          0.1739693 = sum of:
            0.1739693 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.1739693 = score(doc=1826,freq=2.0), product of:
                0.18572637 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.021906832 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
        0.1739693 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.1739693 = score(doc=1826,freq=2.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
        0.1739693 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.1739693 = score(doc=1826,freq=2.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
        0.1739693 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.1739693 = score(doc=1826,freq=2.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
        0.0063334443 = weight(_text_:in in 1826) [ClassicSimilarity], result of:
          0.0063334443 = score(doc=1826,freq=4.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.21253976 = fieldWeight in 1826, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.2777778 = coord(5/18)
    
    Content
    Presentation given at: European Conference on Data Analysis (ECDA 2014), Bremen, Germany, July 2nd to 4th, 2014, LIS workshop.
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  4. Gödert, W.; Lepsky, K.: Informationelle Kompetenz : ein humanistischer Entwurf (2019) 0.14
    0.14046405 = product of:
      0.42139214 = sum of:
        0.040592838 = product of:
          0.12177851 = sum of:
            0.12177851 = weight(_text_:3a in 5955) [ClassicSimilarity], result of:
              0.12177851 = score(doc=5955,freq=2.0), product of:
                0.18572637 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.021906832 = queryNorm
                0.65568775 = fieldWeight in 5955, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5955)
          0.33333334 = coord(1/3)
        0.12177851 = weight(_text_:2f in 5955) [ClassicSimilarity], result of:
          0.12177851 = score(doc=5955,freq=2.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.65568775 = fieldWeight in 5955, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5955)
        0.12177851 = weight(_text_:2f in 5955) [ClassicSimilarity], result of:
          0.12177851 = score(doc=5955,freq=2.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.65568775 = fieldWeight in 5955, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5955)
        0.12177851 = weight(_text_:2f in 5955) [ClassicSimilarity], result of:
          0.12177851 = score(doc=5955,freq=2.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.65568775 = fieldWeight in 5955, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5955)
        0.0070098387 = weight(_text_:in in 5955) [ClassicSimilarity], result of:
          0.0070098387 = score(doc=5955,freq=10.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.23523843 = fieldWeight in 5955, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5955)
        0.008453923 = weight(_text_:der in 5955) [ClassicSimilarity], result of:
          0.008453923 = score(doc=5955,freq=2.0), product of:
            0.048934754 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021906832 = queryNorm
            0.17275909 = fieldWeight in 5955, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5955)
      0.33333334 = coord(6/18)
    
    Abstract
    Discussions about data networks and information technology often revolve around competent action. The publication shows the prerequisites of autonomous informational action: abstracting, forming analogies, heeding plausibilities, drawing conclusions, and being creative. Informational competence is informational autonomy put into practice. Consequences can be drawn for a future image of the human being in information-technology environments.
    Footnote
    Reviewed in: Philosophisch-ethische Rezensionen, 09.11.2019 (Jürgen Czogalla), at: https://philosophisch-ethische-rezensionen.de/rezension/Goedert1.html. In: B.I.T. online 23(2020) H.3, S.345-347 (W. Sühl-Strohmenger) [at: https://www.b-i-t-online.de/heft/2020-03-rezensionen.pdf]. In: Open Password Nr. 805, 14.08.2020 (H.-C. Hobohm) [at: https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzE0MywiOGI3NjZkZmNkZjQ1IiwwLDAsMTMxLDFd].
  5. Herb, U.; Beucke, D.: Die Zukunft der Impact-Messung : Social Media, Nutzung und Zitate im World Wide Web (2013) 0.12
    0.120770186 = product of:
      0.43477264 = sum of:
        0.13917543 = weight(_text_:2f in 2188) [ClassicSimilarity], result of:
          0.13917543 = score(doc=2188,freq=2.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.7493574 = fieldWeight in 2188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=2188)
        0.13917543 = weight(_text_:2f in 2188) [ClassicSimilarity], result of:
          0.13917543 = score(doc=2188,freq=2.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.7493574 = fieldWeight in 2188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=2188)
        0.13917543 = weight(_text_:2f in 2188) [ClassicSimilarity], result of:
          0.13917543 = score(doc=2188,freq=2.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.7493574 = fieldWeight in 2188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=2188)
        0.003582737 = weight(_text_:in in 2188) [ClassicSimilarity], result of:
          0.003582737 = score(doc=2188,freq=2.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.120230645 = fieldWeight in 2188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=2188)
        0.013663604 = weight(_text_:der in 2188) [ClassicSimilarity], result of:
          0.013663604 = score(doc=2188,freq=4.0), product of:
            0.048934754 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021906832 = queryNorm
            0.27922085 = fieldWeight in 2188, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0625 = fieldNorm(doc=2188)
      0.2777778 = coord(5/18)
    
    Abstract
    Scientific careers and publications need reputation and as much attention as possible. Literature that attracts this attention is, according to the common assumption, frequently cited. Starting from this consideration, methods of citation measurement were developed that are supposed to give information about the relevance or even (as is often implicitly and explicitly postulated) the quality of a publication or of a scientist.
    Content
    Cf.: https://www.leibniz-science20.de/forschung/projekte/altmetrics-in-verschiedenen-wissenschaftsdisziplinen/.
  6. Shala, E.: Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.11
    0.105153754 = product of:
      0.31546125 = sum of:
        0.028994884 = product of:
          0.08698465 = sum of:
            0.08698465 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.08698465 = score(doc=4388,freq=2.0), product of:
                0.18572637 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.021906832 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
        0.08698465 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.08698465 = score(doc=4388,freq=2.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
        0.08698465 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.08698465 = score(doc=4388,freq=2.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
        0.08698465 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.08698465 = score(doc=4388,freq=2.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
        0.0054849237 = weight(_text_:in in 4388) [ClassicSimilarity], result of:
          0.0054849237 = score(doc=4388,freq=12.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.18406484 = fieldWeight in 4388, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
        0.020027494 = weight(_text_:der in 4388) [ClassicSimilarity], result of:
          0.020027494 = score(doc=4388,freq=22.0), product of:
            0.048934754 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021906832 = queryNorm
            0.40926933 = fieldWeight in 4388, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
      0.33333334 = coord(6/18)
    
    Abstract
    When machines are described with terms that originally serve to describe human beings, the suspicion immediately suggests itself that those machines possess specifically human abilities or properties. For physical abilities that are imitated mechanically, an anthropomorphizing way of speaking has long been established in everyday language: it is hardly questioned that certain machines can weave, bake, move, or work. With non-physical properties, for instance of a cognitive, social, or moral kind, matters are different. That intelligent and computing machines have meanwhile entered everyday usage would, however, be unthinkable without the long-running discourse on artificial intelligence, which shaped the second half of the past century in particular. Most recently it is the concept of autonomy that is increasingly used to describe new technologies, such as "autonomous mobile robots" or "autonomous systems". By its name, the "autonomy" of those technologies refers to a kind of technological progress that derives from the capacity for self-legislation. From a philosophical point of view, however, this raises the question of how self-legislation is defined in this case, especially since in philosophy the concept of autonomy refers to the political or moral self-legislation of human beings or groups of human beings, or to their actions. In the Handbuch Robotik, by contrast, the author introduces the label "autonomous" almost in passing by predicting that "[.] autonomous robots will in future even take over a large part of elder care."
    Content
    Edited version of the Magisterarbeit (master's thesis), Karlsruhe, KIT, Institute of Philosophy, 2012.
    Footnote
    Cf.: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls.
  7. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.10
    0.097396016 = product of:
      0.35062563 = sum of:
        0.034793857 = product of:
          0.10438157 = sum of:
            0.10438157 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.10438157 = score(doc=400,freq=2.0), product of:
                0.18572637 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.021906832 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
        0.10438157 = weight(_text_:2f in 400) [ClassicSimilarity], result of:
          0.10438157 = score(doc=400,freq=2.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.56201804 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
        0.10438157 = weight(_text_:2f in 400) [ClassicSimilarity], result of:
          0.10438157 = score(doc=400,freq=2.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.56201804 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
        0.10438157 = weight(_text_:2f in 400) [ClassicSimilarity], result of:
          0.10438157 = score(doc=400,freq=2.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.56201804 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
        0.0026870528 = weight(_text_:in in 400) [ClassicSimilarity], result of:
          0.0026870528 = score(doc=400,freq=2.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.09017298 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
      0.2777778 = coord(5/18)
    
    Abstract
    On a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values being a group of child concepts. We call these attributes facets: classification has a few facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods heavily rely on hypernym detection; however, the faceted relations are parent-to-child links, whereas the hypernym relation is a multi-hop, i.e., ancestor-to-descendant, link with a specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations from a data science corpus, and we propose a hierarchy growth algorithm to infer the parent-child links from the three types of relationships. It resolves conflicts by maintaining the acyclic structure of a hierarchy.
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
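    The growth step described in the abstract accepts candidate parent-child links only while the hierarchy stays acyclic. A minimal sketch of that invariant, with hypothetical names and a greedy strongest-first policy standing in for the paper's actual conflict resolution:

      # Candidate links are (parent, child, confidence) triples; a link is
      # accepted unless it would close a cycle, i.e. unless the parent is
      # already reachable from the child.
      def would_close_cycle(children, parent, child):
          stack, seen = [child], set()
          while stack:
              node = stack.pop()
              if node == parent:
                  return True
              if node in seen:
                  continue
              seen.add(node)
              stack.extend(children.get(node, ()))
          return False

      def grow_hierarchy(candidate_links):
          children = {}
          for parent, child, confidence in sorted(candidate_links, key=lambda t: -t[2]):
              if not would_close_cycle(children, parent, child):
                  children.setdefault(parent, []).append(child)
          return children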
  8. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.09
    0.09030259 = product of:
      0.3250893 = sum of:
        0.10438157 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.10438157 = score(doc=563,freq=2.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.10438157 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.10438157 = score(doc=563,freq=2.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.10438157 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.10438157 = score(doc=563,freq=2.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.006008433 = weight(_text_:in in 563) [ClassicSimilarity], result of:
          0.006008433 = score(doc=563,freq=10.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.20163295 = fieldWeight in 563, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.0059361467 = product of:
          0.01780844 = sum of:
            0.01780844 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.01780844 = score(doc=563,freq=2.0), product of:
                0.076713994 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021906832 = queryNorm
                0.23214069 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
          0.33333334 = coord(1/3)
      0.2777778 = coord(5/18)
    
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
    Content
    A thesis presented to the University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
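    LocalMaxs, which the thesis combines with its association measures, keeps an n-gram as a term candidate when its association ("glue") score is a local maximum against the neighbouring n-grams. A simplified sketch of the criterion (the original algorithm treats bigrams specially; glue is whatever measure is plugged in, and subgrams/supergrams are hypothetical helpers):

      # Simplified LocalMaxs test: the n-gram's glue must beat its best
      # (n-1)-gram part and not be beaten by any containing (n+1)-gram.
      def is_local_max(ngram, glue, subgrams, supergrams):
          best_sub = max((glue[s] for s in subgrams(ngram)), default=0.0)
          best_super = max((glue[s] for s in supergrams(ngram)), default=0.0)
          return glue[ngram] > best_sub and glue[ngram] >= best_super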
  9. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.09
    0.08994603 = product of:
      0.3238057 = sum of:
        0.023195906 = product of:
          0.069587715 = sum of:
            0.069587715 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.069587715 = score(doc=5820,freq=2.0), product of:
                0.18572637 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.021906832 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.33333334 = coord(1/3)
        0.098411895 = weight(_text_:2f in 5820) [ClassicSimilarity], result of:
          0.098411895 = score(doc=5820,freq=4.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.5298757 = fieldWeight in 5820, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
        0.098411895 = weight(_text_:2f in 5820) [ClassicSimilarity], result of:
          0.098411895 = score(doc=5820,freq=4.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.5298757 = fieldWeight in 5820, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
        0.098411895 = weight(_text_:2f in 5820) [ClassicSimilarity], result of:
          0.098411895 = score(doc=5820,freq=4.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.5298757 = fieldWeight in 5820, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
        0.0053741056 = weight(_text_:in in 5820) [ClassicSimilarity], result of:
          0.0053741056 = score(doc=5820,freq=18.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.18034597 = fieldWeight in 5820, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
      0.2777778 = coord(5/18)
    
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
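    The bag-of-entities representation from the abstract's last sentence is easy to picture in code: documents are reduced to their entity annotations and ranking happens in entity space. A toy coordinate-match scorer under that assumption (the dissertation's actual models add latent spaces and learning to rank):

      from collections import Counter

      # Score = how often the query's entities occur among each document's
      # entity annotations; docs maps doc_id -> list of entity ids.
      def rank_bag_of_entities(query_entities, docs):
          scores = {}
          for doc_id, annotations in docs.items():
              bag = Counter(annotations)
              scores[doc_id] = sum(bag[e] for e in query_entities)
          return sorted(scores, key=scores.get, reverse=True)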
  10. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.08
    0.08230063 = product of:
      0.29628226 = sum of:
        0.028994884 = product of:
          0.08698465 = sum of:
            0.08698465 = weight(_text_:3a in 4997) [ClassicSimilarity], result of:
              0.08698465 = score(doc=4997,freq=2.0), product of:
                0.18572637 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.021906832 = queryNorm
                0.46834838 = fieldWeight in 4997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4997)
          0.33333334 = coord(1/3)
        0.08698465 = weight(_text_:2f in 4997) [ClassicSimilarity], result of:
          0.08698465 = score(doc=4997,freq=2.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.46834838 = fieldWeight in 4997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
        0.08698465 = weight(_text_:2f in 4997) [ClassicSimilarity], result of:
          0.08698465 = score(doc=4997,freq=2.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.46834838 = fieldWeight in 4997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
        0.08698465 = weight(_text_:2f in 4997) [ClassicSimilarity], result of:
          0.08698465 = score(doc=4997,freq=2.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.46834838 = fieldWeight in 4997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
        0.0063334443 = weight(_text_:in in 4997) [ClassicSimilarity], result of:
          0.0063334443 = score(doc=4997,freq=16.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.21253976 = fieldWeight in 4997, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
      0.2777778 = coord(5/18)
    
    Abstract
    While classifications are heavily used to categorize web content, the evolution of the web foresees a more formal structure - ontology - which can serve this purpose. Ontologies are core artifacts of the Semantic Web which enable machines to use inference rules to conduct automated reasoning on data. Lightweight ontologies bridge the gap between classifications and ontologies. A lightweight ontology (LO) is an ontology representing a backbone taxonomy where the concept of the child node is more specific than the concept of the parent node. Formal lightweight ontologies can be generated from their informal ones. The key applications of formal lightweight ontologies are document classification, semantic search, and data integration. However, these applications suffer from the following problems: the disambiguation accuracy of the state of the art NLP tools used in generating formal lightweight ontologies from their informal ones; the lack of background knowledge needed for the formal lightweight ontologies; and the limitation of ontology reuse. In this dissertation, we propose a novel solution to these problems in formal lightweight ontologies; namely, faceted lightweight ontology (FLO). FLO is a lightweight ontology in which terms, present in each node label, and their concepts, are available in the background knowledge (BK), which is organized as a set of facets. A facet can be defined as a distinctive property of the groups of concepts that can help in differentiating one group from another. Background knowledge can be defined as a subset of a knowledge base, such as WordNet, and often represents a specific domain.
    Content
    PhD dissertation at the International Doctorate School in Information and Communication Technology. Cf.: https://core.ac.uk/download/pdf/150083013.pdf.
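    The backbone condition that makes a taxonomy a lightweight ontology (each child node's concept is more specific than its parent's) fits in one line; writing C(n) for the formal concept attached to node n (notation assumed here, not the dissertation's):

      \forall\, c, p :\ \mathrm{childOf}(c, p) \implies C(c) \sqsubseteq C(p)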
  11. Piros, A.: Az ETO-jelzetek automatikus interpretálásának és elemzésének kérdései (2018) 0.08
    0.081420995 = product of:
      0.29311556 = sum of:
        0.028994884 = product of:
          0.08698465 = sum of:
            0.08698465 = weight(_text_:3a in 855) [ClassicSimilarity], result of:
              0.08698465 = score(doc=855,freq=2.0), product of:
                0.18572637 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.021906832 = queryNorm
                0.46834838 = fieldWeight in 855, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=855)
          0.33333334 = coord(1/3)
        0.08698465 = weight(_text_:2f in 855) [ClassicSimilarity], result of:
          0.08698465 = score(doc=855,freq=2.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.46834838 = fieldWeight in 855, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=855)
        0.08698465 = weight(_text_:2f in 855) [ClassicSimilarity], result of:
          0.08698465 = score(doc=855,freq=2.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.46834838 = fieldWeight in 855, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=855)
        0.08698465 = weight(_text_:2f in 855) [ClassicSimilarity], result of:
          0.08698465 = score(doc=855,freq=2.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.46834838 = fieldWeight in 855, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=855)
        0.0031667221 = weight(_text_:in in 855) [ClassicSimilarity], result of:
          0.0031667221 = score(doc=855,freq=4.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.10626988 = fieldWeight in 855, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=855)
      0.2777778 = coord(5/18)
    
    Abstract
    Converting UDC numbers manually to a complex format such as the one mentioned above is an unrealistic expectation; support for building these representations, as far as possible automatically, is a well-founded requirement. An additional advantage of this approach is that existing records could also be processed and converted. In my dissertation I would also like to prove that it is possible to design and implement an algorithm that is able to convert pre-coordinated UDC numbers into the introduced format by identifying all their elements and revealing their whole syntactic structure. I will discuss a feasible way of building a UDC-specific XML schema for describing the most detailed and complicated UDC numbers (containing not only the common auxiliary signs and numbers, but also the different types of special auxiliaries). The schema definition is available online at: http://piros.udc-interpreter.hu#xsd. The primary goal of my research is to prove that it is possible to support building, retrieving, and analyzing UDC numbers without compromises, taking in the whole syntactic richness of the scheme and storing the UDC numbers while preserving the meaning of pre-coordination. The research has also included the implementation of software that parses UDC classmarks, intended to prove that such a solution can be applied automatically, without additional effort, and even retrospectively on existing collections.
    Content
    See also: New automatic interpreter for complex UDC numbers. At: https://udcc.org/files/AttilaPiros_EC_36-37_2014-2015.pdf
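    The parsing problem described above begins with the common UDC connector signs (+ for coordination, / for ranges, : and :: for relations) before the special auxiliaries are analyzed. A toy top-level splitter, not Piros's actual interpreter:

      import re

      # Splits a pre-coordinated UDC classmark at its top-level connectors.
      # Special auxiliaries such as (100) or "1939/1945" would need the full
      # grammar the dissertation describes; a '/' inside quotes is split
      # incorrectly by this toy version.
      UDC_CONNECTORS = re.compile(r'(\+|::|:|/)')

      def split_udc(classmark):
          return [tok for tok in UDC_CONNECTORS.split(classmark) if tok]

      print(split_udc('622+669'))   # ['622', '+', '669']
      print(split_udc('61:54'))     # ['61', ':', '54']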
  12. Binder, A.; Meinecke, F.C.; Bießmann, F.; Kawanabe, M.; Müller, K.-R.: Maschinelles Lernen, Mustererkennung in der Bildverarbeitung (2013) 0.07
    0.072569534 = product of:
      0.3265629 = sum of:
        0.09385616 = product of:
          0.18771233 = sum of:
            0.18771233 = weight(_text_:mustererkennung in 727) [ClassicSimilarity], result of:
              0.18771233 = score(doc=727,freq=2.0), product of:
                0.19292286 = queryWeight, product of:
                  8.806516 = idf(docFreq=17, maxDocs=44218)
                  0.021906832 = queryNorm
                0.97299165 = fieldWeight in 727, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.806516 = idf(docFreq=17, maxDocs=44218)
                  0.078125 = fieldNorm(doc=727)
          0.5 = coord(1/2)
        0.2092938 = weight(_text_:bildverarbeitung in 727) [ClassicSimilarity], result of:
          0.2092938 = score(doc=727,freq=2.0), product of:
            0.20371147 = queryWeight, product of:
              9.298992 = idf(docFreq=10, maxDocs=44218)
              0.021906832 = queryNorm
            1.0274031 = fieldWeight in 727, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              9.298992 = idf(docFreq=10, maxDocs=44218)
              0.078125 = fieldNorm(doc=727)
        0.0063334443 = weight(_text_:in in 727) [ClassicSimilarity], result of:
          0.0063334443 = score(doc=727,freq=4.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.21253976 = fieldWeight in 727, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=727)
        0.017079504 = weight(_text_:der in 727) [ClassicSimilarity], result of:
          0.017079504 = score(doc=727,freq=4.0), product of:
            0.048934754 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021906832 = queryNorm
            0.34902605 = fieldWeight in 727, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.078125 = fieldNorm(doc=727)
      0.22222222 = coord(4/18)
    
    Source
    Grundlagen der praktischen Information und Dokumentation. Handbuch zur Einführung in die Informationswissenschaft und -praxis. 6., völlig neu gefaßte Ausgabe. Hrsg. von R. Kuhlen, W. Semar u. D. Strauch. Begründet von Klaus Laisiepen, Ernst Lutterbeck, Karl-Heinrich Meyer-Uhlenried
  13. Laßmann, G.: Asimovs Robotergesetze : was leisten sie wirklich? (2017) 0.06
    0.057047855 = product of:
      0.17114356 = sum of:
        0.029973106 = weight(_text_:technik in 4067) [ClassicSimilarity], result of:
          0.029973106 = score(doc=4067,freq=2.0), product of:
            0.109023005 = queryWeight, product of:
              4.976667 = idf(docFreq=828, maxDocs=44218)
              0.021906832 = queryNorm
            0.2749246 = fieldWeight in 4067, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.976667 = idf(docFreq=828, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4067)
        0.057544984 = product of:
          0.11508997 = sum of:
            0.11508997 = weight(_text_:elektrotechnik in 4067) [ClassicSimilarity], result of:
              0.11508997 = score(doc=4067,freq=4.0), product of:
                0.17964433 = queryWeight, product of:
                  8.200379 = idf(docFreq=32, maxDocs=44218)
                  0.021906832 = queryNorm
                0.6406546 = fieldWeight in 4067, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  8.200379 = idf(docFreq=32, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4067)
          0.5 = coord(1/2)
        0.062723234 = product of:
          0.12544647 = sum of:
            0.12544647 = weight(_text_:elektronik in 4067) [ClassicSimilarity], result of:
              0.12544647 = score(doc=4067,freq=4.0), product of:
                0.18755299 = queryWeight, product of:
                  8.561393 = idf(docFreq=22, maxDocs=44218)
                  0.021906832 = queryNorm
                0.6688588 = fieldWeight in 4067, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  8.561393 = idf(docFreq=22, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4067)
          0.5 = coord(1/2)
        0.003878427 = weight(_text_:in in 4067) [ClassicSimilarity], result of:
          0.003878427 = score(doc=4067,freq=6.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.1301535 = fieldWeight in 4067, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4067)
        0.0120770335 = weight(_text_:der in 4067) [ClassicSimilarity], result of:
          0.0120770335 = score(doc=4067,freq=8.0), product of:
            0.048934754 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021906832 = queryNorm
            0.2467987 = fieldWeight in 4067, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4067)
        0.0049467892 = product of:
          0.014840367 = sum of:
            0.014840367 = weight(_text_:22 in 4067) [ClassicSimilarity], result of:
              0.014840367 = score(doc=4067,freq=2.0), product of:
                0.076713994 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021906832 = queryNorm
                0.19345059 = fieldWeight in 4067, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4067)
          0.33333334 = coord(1/3)
      0.33333334 = coord(6/18)
    
    Abstract
    In simulations, inner states arise in robots whenever catastrophes such as their own total failure are imminent. Isaac Asimov's well-known laws of robotics are more than 75 years old. They have never been examined systematically, although, given the technical development of computing, drawing up rules for robots is becoming ever more urgent: autonomous vehicles drive on our roads, the EU Parliament debates robot rights, and not only the German Ethics Council concerns itself with care robots. In his e-book Asimovs Robotergesetze. Was leisten sie wirklich? Gunter Laßmann has examined Asimov's laws of robotics philosophically and technically. He discusses what a robot is and how such a robot would "think" by today's state of the art, to what extent robots could be individual, and who bears responsibility for the robot. All three laws of robotics are simulated for the first time, which in some cases leads to unexpected results. The robots are then developed further by adding learning; the differences between rule-based learning and deep learning are presented and explained with a current example.
    Classification
    621.3 Elektrotechnik, Elektronik
    Date
    21. 1.2018 19:02:22
    DDC
    621.3 Elektrotechnik, Elektronik
  14. Floridi, L.: Information: a very short introduction (2010) 0.04
    0.041068837 = product of:
      0.18480976 = sum of:
        0.098411895 = weight(_text_:nachrichtentechnik in 3270) [ClassicSimilarity], result of:
          0.098411895 = score(doc=3270,freq=4.0), product of:
            0.18572637 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.021906832 = queryNorm
            0.5298757 = fieldWeight in 3270, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=3270)
        0.049205948 = product of:
          0.098411895 = sum of:
            0.098411895 = weight(_text_:nachrichtentechnik in 3270) [ClassicSimilarity], result of:
              0.098411895 = score(doc=3270,freq=4.0), product of:
                0.18572637 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.021906832 = queryNorm
                0.5298757 = fieldWeight in 3270, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3270)
          0.5 = coord(1/2)
        0.004387939 = weight(_text_:in in 3270) [ClassicSimilarity], result of:
          0.004387939 = score(doc=3270,freq=12.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.14725187 = fieldWeight in 3270, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=3270)
        0.032803968 = product of:
          0.098411895 = sum of:
            0.098411895 = weight(_text_:nachrichtentechnik in 3270) [ClassicSimilarity], result of:
              0.098411895 = score(doc=3270,freq=4.0), product of:
                0.18572637 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.021906832 = queryNorm
                0.5298757 = fieldWeight in 3270, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3270)
          0.33333334 = coord(1/3)
      0.22222222 = coord(4/18)
    
    Abstract
    We live in a society that is awash with information, but few of us really understand what information is. In this Very Short Introduction, one of the world's leading authorities on the philosophy of information and on information ethics, Luciano Floridi, offers an illuminating exploration of information as it relates to both philosophy and science. He discusses the roots of the concept of information in mathematics and science, and considers the role of information in several fields, including biology. Floridi also discusses concepts such as "Infoglut" (too much information to process) and the emergence of an information society, and he addresses the nature of information as a communication process and its place as a physical phenomenon. Perhaps more important, he explores information's meaning and value, and ends by considering the broader social and ethical issues relating to information, including problems surrounding accessibility, privacy, ownership, copyright, and open source. This book helps us understand the true meaning of the concept and how it can be used to understand our world. About the Series: Combining authority with wit, accessibility, and style, Very Short Introductions offer an introduction to some of life's most interesting topics. Written by experts for the newcomer, they demonstrate the finest contemporary thinking about the central problems and issues in hundreds of key topics, from philosophy to Freud, quantum theory to Islam.
    BK
    53.71 (Theoretische Nachrichtentechnik)
    Classification
    53.71 (Theoretische Nachrichtentechnik)
    Series
    Very short introductions : stimulating ways in to new subjects ; 225
  15. Deisseroth, K.: Lichtschalter im Gehirn (2011) 0.03
    0.031001488 = product of:
      0.1395067 = sum of:
        0.07193545 = weight(_text_:technik in 4248) [ClassicSimilarity], result of:
          0.07193545 = score(doc=4248,freq=2.0), product of:
            0.109023005 = queryWeight, product of:
              4.976667 = idf(docFreq=828, maxDocs=44218)
              0.021906832 = queryNorm
            0.659819 = fieldWeight in 4248, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.976667 = idf(docFreq=828, maxDocs=44218)
              0.09375 = fieldNorm(doc=4248)
        0.0053741056 = weight(_text_:in in 4248) [ClassicSimilarity], result of:
          0.0053741056 = score(doc=4248,freq=2.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.18034597 = fieldWeight in 4248, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.09375 = fieldNorm(doc=4248)
        0.01449244 = weight(_text_:der in 4248) [ClassicSimilarity], result of:
          0.01449244 = score(doc=4248,freq=2.0), product of:
            0.048934754 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021906832 = queryNorm
            0.29615843 = fieldWeight in 4248, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.09375 = fieldNorm(doc=4248)
        0.047704708 = product of:
          0.07155706 = sum of:
            0.035940185 = weight(_text_:29 in 4248) [ClassicSimilarity], result of:
              0.035940185 = score(doc=4248,freq=2.0), product of:
                0.077061385 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.021906832 = queryNorm
                0.46638384 = fieldWeight in 4248, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4248)
            0.03561688 = weight(_text_:22 in 4248) [ClassicSimilarity], result of:
              0.03561688 = score(doc=4248,freq=2.0), product of:
                0.076713994 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021906832 = queryNorm
                0.46428138 = fieldWeight in 4248, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4248)
          0.6666667 = coord(2/3)
      0.22222222 = coord(4/18)
    
    Abstract
    With optogenetics, scientists can examine in unprecedented detail how our brain works. The new technique is invigorating entire fields of research, up to and including psychiatry.
    Source
    Spektrum der Wissenschaft. 2011, H.2, S.22-29
  16. Wolfangel, E.: ¬Die Grenzen der künstlichen Intelligenz (2016) 0.03
    0.029389411 = product of:
      0.13225235 = sum of:
        0.037381064 = product of:
          0.07476213 = sum of:
            0.07476213 = weight(_text_:bilderkennung in 4107) [ClassicSimilarity], result of:
              0.07476213 = score(doc=4107,freq=2.0), product of:
                0.2057994 = queryWeight, product of:
                  9.394302 = idf(docFreq=9, maxDocs=44218)
                  0.021906832 = queryNorm
                0.36327672 = fieldWeight in 4107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  9.394302 = idf(docFreq=9, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4107)
          0.5 = coord(1/2)
        0.073252834 = weight(_text_:bildverarbeitung in 4107) [ClassicSimilarity], result of:
          0.073252834 = score(doc=4107,freq=2.0), product of:
            0.20371147 = queryWeight, product of:
              9.298992 = idf(docFreq=10, maxDocs=44218)
              0.021906832 = queryNorm
            0.35959113 = fieldWeight in 4107, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              9.298992 = idf(docFreq=10, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4107)
        0.002714899 = weight(_text_:in in 4107) [ClassicSimilarity], result of:
          0.002714899 = score(doc=4107,freq=6.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.09110745 = fieldWeight in 4107, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4107)
        0.018903548 = weight(_text_:der in 4107) [ClassicSimilarity], result of:
          0.018903548 = score(doc=4107,freq=40.0), product of:
            0.048934754 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021906832 = queryNorm
            0.38630107 = fieldWeight in 4107, product of:
              6.3245554 = tf(freq=40.0), with freq of:
                40.0 = termFreq=40.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4107)
      0.22222222 = coord(4/18)
    
    Abstract
    The victory of a piece of Google software over the world champion in the board game Go has given artificial-intelligence technology a boost in public standing. The hard slog of everyday practice, however, shows that machine-learning algorithms harbour all sorts of pitfalls - for their developers too.
    Content
    "Wie lernen Maschinen überhaupt? Eine zentrale Unterscheidung betrifft die Art des Lernens: Algorithmen können überwacht oder unüberwacht lernen. Ersteres wird unter anderem für Klassifikationsaufgaben genutzt: Ist beispielsweise ein Mensch auf einem Foto oder nicht? Grundlage dafür sind Trainingsdaten, anhand derer der Algorithmus auf Vorgabe eines Menschen lernt, was das richtige Ergebnis ist - auf diesen 1000 Bildern ist ein Mensch, auf diesen 1000 nicht. Hat das System für alle eventuell vorkommenden Fälle genügend Trainingsdaten, so die Idee, lernt es daraus selbst, bislang unbekannte Bilder zu klassifizieren. Alphago lernte beispielsweise unter anderem anhand von Millionen menschlicher Go-Spielzüge.
    Even supervised learning is hard to control. The term "supervised learning" is misleading: this kind of learning can be controlled far less than the term suggests. In the end the algorithm decides on its own which criteria matter for the distinction. "That is why it is essential that the training data set is representative of the kind of data you want to predict," says Fred Hamprecht, professor of image processing at Heidelberg University. That can be tricky, though. A story circulating among researchers tells of a system trained to recognize tanks in images. The training data consisted of manufacturers' promotional photos of tanks and of arbitrary other images showing no tank. In deployment, however, the system did not work: it failed to recognize tanks and instead picked out images in which the sun was shining. The problem was quickly identified - the sun had also been shining in every promotional photo, and the network had adopted that as its criterion. "If the example isn't true, it is at least nicely invented," says Hamprecht. But not all errors are so easy to find. "The question is always where the data come from," Hamprecht says. A sensor, for instance, ages or gets dirty, and images may become darker over time. Anyone who wants to avoid wrong results has to factor in this "data aging" - and be aware of it in the first place. Nor is a wrong result necessarily easy to spot, since algorithms decide not only about things a human can verify at a glance, such as whether a tank is in a picture.
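    (The tank anecdote can be reproduced in a few lines - a hypothetical sketch, not from the article: a single-feature threshold learner trained on biased data separates the training set perfectly, yet fails on the first overcast tank photo.)

      import random
      random.seed(1)

      # Each "image" is reduced to one incidental feature (brightness) plus
      # the label we actually care about (is a tank present?).
      def make_sample(tank, sunny):
          brightness = random.uniform(0.7, 1.0) if sunny else random.uniform(0.0, 0.4)
          return {"brightness": brightness, "tank": tank}

      # Biased training set: every tank photo is a sunny promotional shot.
      train = [make_sample(tank=True, sunny=True) for _ in range(100)] + \
              [make_sample(tank=False, sunny=False) for _ in range(100)]

      # A brightness threshold separates the biased training set perfectly ...
      threshold = 0.55
      train_acc = sum((s["brightness"] > threshold) == s["tank"] for s in train) / len(train)
      print(train_acc)  # 1.0

      # ... but an overcast tank photo is then classified as "no tank".
      test = make_sample(tank=True, sunny=False)
      print(test["brightness"] > threshold)  # False: the tank is missed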
    Neural networks learn from experience. With ever larger computers and growing masses of training data, so-called neural networks are gaining importance in image recognition. "They are the most powerful pattern-recognition methods available today," says Hamprecht. They loosely imitate the workings of the human brain: the networks consist of several layers, each with a number of neurons to be fixed in advance, whose connections strengthen or weaken depending on the "experiences" they make. Such experiences are, for example, the training data from supervised learning and the feedback on whether the prediction for a training sample was right or wrong. Thanks to the abundance of training data, far larger and deeper networks can be trained today than a few years ago. While one famous computer-vision data set consisted of 256 images and its successor of 1,000, there are now data sets with a million labelled images - images on which humans have marked what is to be seen. But the networks also have serious drawbacks: "With neural networks it is hard to trace how they arrived at a decision," says Hamprecht. Moreover, the design of a neural network involves a great deal of arbitrariness: the choice of how many layers with how many neurons to use rests largely on gut feeling or trial and error. The developers test different variants and watch which produces the best result; they cannot explain their choice. "I don't like this fiddling around," says Hamprecht. "I prefer to get to the bottom of things.""
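    (How such a network adjusts its connections can be shown in miniature. The sketch below - an illustration under standard textbook assumptions, not code from the article - trains one hidden layer of four sigmoid neurons on the XOR function with plain gradient descent; the layer sizes, learning rate and step count are exactly the kind of trial-and-error choices Hamprecht criticizes.)

      import numpy as np
      rng = np.random.default_rng(0)

      # Four training samples (XOR) with their labels.
      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
      y = np.array([[0], [1], [1], [0]], dtype=float)

      # Two layers: 2 inputs -> 4 hidden neurons -> 1 output neuron.
      W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
      W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      for step in range(5000):
          # Forward pass: activations flow through the layers.
          h = sigmoid(X @ W1 + b1)        # hidden layer
          out = sigmoid(h @ W2 + b2)      # output layer
          # Backward pass: error feedback strengthens or weakens connections.
          d_out = (out - y) * out * (1 - out)
          d_h = (d_out @ W2.T) * h * (1 - h)
          W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
          W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

      print(out.round(2).ravel())  # approaches [0, 1, 1, 0]

    With an unlucky random initialization the same network can stall in a poor solution - a small-scale version of the opacity and arbitrariness described above.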
    Source
    http://www.spektrum.de/news/die-grenzen-der-kuenstlichen-intelligenz/1409149
  17. Grundlagen der praktischen Information und Dokumentation : Handbuch zur Einführung in die Informationswissenschaft und -praxis (2013) 0.03
    0.027818326 = product of:
      0.12518246 = sum of:
        0.03284966 = product of:
          0.06569932 = sum of:
            0.06569932 = weight(_text_:mustererkennung in 4382) [ClassicSimilarity], result of:
              0.06569932 = score(doc=4382,freq=2.0), product of:
                0.19292286 = queryWeight, product of:
                  8.806516 = idf(docFreq=17, maxDocs=44218)
                  0.021906832 = queryNorm
                0.34054708 = fieldWeight in 4382, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.806516 = idf(docFreq=17, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4382)
          0.5 = coord(1/2)
        0.073252834 = weight(_text_:bildverarbeitung in 4382) [ClassicSimilarity], result of:
          0.073252834 = score(doc=4382,freq=2.0), product of:
            0.20371147 = queryWeight, product of:
              9.298992 = idf(docFreq=10, maxDocs=44218)
              0.021906832 = queryNorm
            0.35959113 = fieldWeight in 4382, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              9.298992 = idf(docFreq=10, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4382)
        0.0038394467 = weight(_text_:in in 4382) [ClassicSimilarity], result of:
          0.0038394467 = score(doc=4382,freq=12.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.1288454 = fieldWeight in 4382, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4382)
        0.015240527 = weight(_text_:der in 4382) [ClassicSimilarity], result of:
          0.015240527 = score(doc=4382,freq=26.0), product of:
            0.048934754 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021906832 = queryNorm
            0.31144586 = fieldWeight in 4382, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4382)
      0.22222222 = coord(4/18)
    
    Abstract
    For forty years this standard work has provided scientists, practitioners and students with the foundations of professional, scientifically grounded information work. With the completely recast 6th edition, the editors Rainer Kuhlen, Wolfgang Semar and Dietmar Strauch respond to the substantial technical, methodological and organizational changes in the field of information and documentation, taking account of the rapid development of the Internet and of information science. The more than fifty contributions are assigned to four parts: fundamentals (A), methods (B), information organization (C) and information infrastructures (D).
    Content
    Contains the contributions: A: Grundlegendes Rainer Kuhlen: Information - Informationswissenschaft - Ursula Georgy: Professionalisierung in der Informationsarbeit - Thomas Hoeren: Urheberrecht und Internetrecht - Stephan Holländer, Rolf A. Tobler: Schweizer Urheberrecht im digitalen Umfeld - Gerhard Reichmann: Urheberrecht und Internetrecht: Österreich - Rainer Kuhlen: Wissensökologie - Wissen und Information als Commons (Gemeingüter) - Rainer Hammwöhner: Hypertext - Christa Womser-Hacker, Thomas Mandl: Information Seeking Behaviour (ISB) - Hans-Christoph Hobohm: Informationsverhalten (Mensch und Information) - Urs Dahinden: Methoden empirischer Sozialforschung für die Informationspraxis - Michael Seadle: Ethnografische Verfahren der Datenerhebung - Hans-Christoph Hobohm: Erhebungsmethoden in der Informationsverhaltensforschung
    B: Methodisches Bernard Bekavac: Web-Technologien - Rolf Assfalg: Metadaten - Ulrich Reimer: Wissensorganisation - Thomas Mandl: Text Mining und Data Mining - Harald Reiterer, Hans-Christian Jetter: Informationsvisualisierung - Katrin Weller: Ontologien - Stefan Gradmann: Semantic Web und Linked Open Data - Isabella Peters: Benutzerzentrierte Erschließungsverfahre - Ulrich Reimer: Empfehlungssysteme - Udo Hahn: Methodische Grundlagen der Informationslinguistik - Klaus Lepsky: Automatische Indexierung - Udo Hahn: Automatisches Abstracting - Ulrich Heid: Maschinelle Übersetzung - Bernd Ludwig: Spracherkennung - Norbert Fuhr: Modelle im Information Retrieval - Christa Womser-Hacker: Kognitives Information Retrieval - Alexander Binder, Frank C. Meinecke, Felix Bießmann, Motoaki Kawanabe, Klaus-Robert Müller: Maschinelles Lernen, Mustererkennung in der Bildverarbeitung
    C: Informationsorganisation Helmut Krcmar: Informations- und Wissensmanagement - Eberhard R. Hilf, Thomas Severiens: Vom Open Access für Dokumente und Daten zu Open Content in der Wissenschaft - Christa Womser-Hacker: Evaluierung im Information Retrieval - Joachim Griesbaum: Online-Marketing - Nicola Döring: Modelle der Computervermittelten Kommunikation - Harald Reiterer, Florian Geyer: Mensch-Computer-Interaktion - Steffen Staab: Web Science - Michael Weller, Elena Di Rosa: Lizenzierungsformen - Wolfgang Semar, Sascha Beck: Sicherheit von Informationssystemen - Stefanie Haustein, Dirk Tunger: Sziento- und bibliometrische Verfahren
    Footnote
    Also available as an eBook; cf.: http://www.degruyter.com/view/product/174371?rskey=Hzo8Fb&result=3&q=kuhlen. Reviewed in: iwp 64(2013) H.6, S.375-377 (U. Herb); Mitt VOEB 66(2013) H.3/4, S.681-686 (C. Schlögl); BuB 66(2014) H.2, S.150-152 (J. Bertram)
  18. Nassehi, A.: Muster : Theorie der digitalen Gesellschaft (2019) 0.03
    0.026311068 = product of:
      0.11839981 = sum of:
        0.059343725 = weight(_text_:technik in 5426) [ClassicSimilarity], result of:
          0.059343725 = score(doc=5426,freq=16.0), product of:
            0.109023005 = queryWeight, product of:
              4.976667 = idf(docFreq=828, maxDocs=44218)
              0.021906832 = queryNorm
            0.54432297 = fieldWeight in 5426, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              4.976667 = idf(docFreq=828, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5426)
        0.03284966 = product of:
          0.06569932 = sum of:
            0.06569932 = weight(_text_:mustererkennung in 5426) [ClassicSimilarity], result of:
              0.06569932 = score(doc=5426,freq=2.0), product of:
                0.19292286 = queryWeight, product of:
                  8.806516 = idf(docFreq=17, maxDocs=44218)
                  0.021906832 = queryNorm
                0.34054708 = fieldWeight in 5426, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.806516 = idf(docFreq=17, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=5426)
          0.5 = coord(1/2)
        0.0038394467 = weight(_text_:in in 5426) [ClassicSimilarity], result of:
          0.0038394467 = score(doc=5426,freq=12.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.1288454 = fieldWeight in 5426, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5426)
        0.02236698 = weight(_text_:der in 5426) [ClassicSimilarity], result of:
          0.02236698 = score(doc=5426,freq=56.0), product of:
            0.048934754 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021906832 = queryNorm
            0.4570776 = fieldWeight in 5426, product of:
              7.483315 = tf(freq=56.0), with freq of:
                56.0 = termFreq=56.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5426)
      0.22222222 = coord(4/18)
    
    Abstract
    We believe that the triumph of digital technology has revolutionized everything within a few years: our relationships, our work and even the workings of democratic elections. In his new theory of society, the sociologist Armin Nassehi turns the tables and shows, beyond both panic and trivialization, that digitalization is merely a particularly sophisticated technical solution to a problem that modern societies have always faced: how do society, companies, states, administrations, law-enforcement agencies - and we ourselves - deal with invisible patterns? Since the 19th century, functionally differentiated societies have employed statistical pattern-recognition technologies to detect, regulate and control human behaviour. Often enough, the digitalization of our lifeworld is experienced today as a disturbance, a challenge and a questioning of familiar routines. In this book Armin Nassehi undertakes to ground digital technology in the very structure of modern society. He develops the thesis that certain social regularities, structures and patterns form the material from which digitalization first draws its economic, political and scientific potential for control and steering. As a consequence of digitalization, society today is quite literally being rediscovered.
    Content
    Einleitung Wie über Digitalisierung nachdenken? - Eine techniksoziologische Intuition - Frühe Technologieschübe - Original und Kopie - Produktive Fehlanzeige und Sollbruchstelle 1 Das Bezugsproblem der Digitalisierung Funktionalistische Fragen - Connecting Data - offline - Was ist das Problem? - Das Unbehagen an der digitalen Kultur - Die digitale Entdeckung der "Gesellschaft" - Empirische Sozialforschung als Mustererkennung - "Gesellschaft" als Digitalisierungsmaterial - Der / die / das Cyborg als Überwindung der Gesellschaft? 2 Der Eigensinn des Digitalen Die ungenaue Exaktheit der Welt - Der Eigensinn der Daten - Kybernetik und die Rückkopplung von Informationen - Digitalisierung der Kommunikation - Dynamik der Geschlossenheit - Die Selbstreferenz der Datenwelt 3 Multiple Verdoppelungen der Welt Daten als Beobachter - Verdoppelungen - Störungen - Querliegende datenförmige Verdoppelungen - Die Spur der Spur und diskrete Verdoppelungen - Spuren, Muster, Netze 4. Einfalt und Vielfalt Medium und Form - Codierung und Programmierung - Die digitale Einfachheit der Gesellschaft - Optionssteigerungen - Sapere aude im Spiegel der Digitalisierung Exkurs Digitaler Stoffwechsel
    5 Funktionierende Technik Die Funktion des Technischen - Digitale Technik - Kommunizierende Technik - Die Funktion des Funktionierens - Niedrigschwellige Technik - Dämonisierte Technik - Unsichtbare Technik und der Turing-Test - Das Privileg, Fehler zu machen 6 Lernende Technik Entscheidungen - Abduktive Maschinen? - Verteilte Intelligenz? - Anthropologische und technologische Fragen - Erlebende und handelnde Maschinen - Unvollständigkeit, Vorläufigkeit, systemische Paradoxien - Künstliche, leibliche, unvollständige Intelligenz 7 Das Internet als Massenmedium Sinnüberschussgeschäfte - Synchronisationsfunktion - Synchronisation und Sozialisation - Selektivität, Medialität und Voice im Netz - Beim Zuschauen zuschauen - Komplexität und Überhitzung - Das Netz als Archiv aller möglichen Sätze - Intelligenz im Modus des Futur 2.0 8 Gefährdete Privatheit Die Unwahrscheinlichkeit informationeller Selbstbestimmung - Ein neuer Strukturwandel der Öffentlichkeit? - Gefährdungen - Privatheit 1.0 - Privatheit 1.0 als Ergebnis von Big Data? - Big Data und die Privatheit 2.0 - Privatheit retten? 9 Debug: Die Wiedergeburt der Soziologie aus dem Geist der Digitalisierung Digitale Dynamik und gesellschaftliche Komplexität - Eine Chance für die Soziologie
    Footnote
    Cf. the review essay by Hans-Christoph Hobohm entitled "Ein langer Brief an die Informationswissenschaft" in: Open Password, no. 642, 11.10.2019 [https://www.password-online.de/?email_id=798&user_id=1045]. See also the review at: https://taz.de/Buch-zur-Soziologie-der-Gesellschaft/!5634072/, and the comment in Open Password, no. 652, 29.10.2019.
  19. Thomi, M.: Überblick und Bewertung von Musiksuchmaschinen (2011) 0.02
    0.023311501 = product of:
      0.10490175 = sum of:
        0.035967726 = weight(_text_:technik in 3046) [ClassicSimilarity], result of:
          0.035967726 = score(doc=3046,freq=2.0), product of:
            0.109023005 = queryWeight, product of:
              4.976667 = idf(docFreq=828, maxDocs=44218)
              0.021906832 = queryNorm
            0.3299095 = fieldWeight in 3046, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.976667 = idf(docFreq=828, maxDocs=44218)
              0.046875 = fieldNorm(doc=3046)
        0.0563137 = product of:
          0.1126274 = sum of:
            0.1126274 = weight(_text_:mustererkennung in 3046) [ClassicSimilarity], result of:
              0.1126274 = score(doc=3046,freq=2.0), product of:
                0.19292286 = queryWeight, product of:
                  8.806516 = idf(docFreq=17, maxDocs=44218)
                  0.021906832 = queryNorm
                0.583795 = fieldWeight in 3046, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.806516 = idf(docFreq=17, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3046)
          0.5 = coord(1/2)
        0.0053741056 = weight(_text_:in in 3046) [ClassicSimilarity], result of:
          0.0053741056 = score(doc=3046,freq=8.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.18034597 = fieldWeight in 3046, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3046)
        0.00724622 = weight(_text_:der in 3046) [ClassicSimilarity], result of:
          0.00724622 = score(doc=3046,freq=2.0), product of:
            0.048934754 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021906832 = queryNorm
            0.14807922 = fieldWeight in 3046, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.046875 = fieldNorm(doc=3046)
      0.22222222 = coord(4/18)
    
    Abstract
    The growing amount of music available as audio files on the Internet, and its popularity with users all over the world, calls for practicable retrieval solutions. The field of music information retrieval (MIR) includes, among other things, the development of MIR systems with diverse, partly multimedia approaches. This thesis explains how MIR systems (i.e. music search engines) work, covering both text-based systems and those operating with pattern recognition. In addition, freely accessible music search engines on the web that cover pop/rock are surveyed in the form of an evaluated state of the art. Based on this state of the art and on second evaluations, recommendations are formulated as requirements for music search engines, and possible future scenarios are outlined.
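    (One classic pattern-based technique behind the thesis's distinction is melody-contour matching with the Parsons code. The sketch below is an illustration, not taken from the thesis; the note numbers are MIDI pitches chosen for the example.)

      # A melody is reduced to its Parsons code: U(p), D(own) or R(epeat)
      # between successive notes - the pitch itself is thrown away.
      def parsons(notes):
          return "".join("U" if cur > prev else "D" if cur < prev else "R"
                         for prev, cur in zip(notes, notes[1:]))

      songs = {
          "Ode to Joy":    [64, 64, 65, 67, 67, 65, 64, 62],
          "Frère Jacques": [60, 62, 64, 60, 60, 62, 64, 60],
      }
      index = {title: parsons(melody) for title, melody in songs.items()}

      # A hummed query only needs the up/down contour right, not the key:
      # this query is the same tune transposed a major third down.
      query = parsons([52, 52, 55, 57, 57, 55, 52, 50])
      print([title for title, code in index.items() if query in code])
      # -> ['Ode to Joy']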
    Content
    This publication originated as a bachelor thesis for the degree of Bachelor of Science (BSc) FHO in Information Science. See: http://www.fh-htwchur.ch/uploads/media/CSI_45_Thomi.pdf.
    Imprint
    Chur : Hochschule für Technik und Wirtschaft / Arbeitsbereich Informationswissenschaft
  20. Franke-Maier, M.: Anforderungen an die Qualität der Inhaltserschließung im Spannungsfeld von intellektuell und automatisch erzeugten Metadaten (2018) 0.02
    0.022029156 = product of:
      0.0991312 = sum of:
        0.041962348 = weight(_text_:technik in 5344) [ClassicSimilarity], result of:
          0.041962348 = score(doc=5344,freq=2.0), product of:
            0.109023005 = queryWeight, product of:
              4.976667 = idf(docFreq=828, maxDocs=44218)
              0.021906832 = queryNorm
            0.38489443 = fieldWeight in 5344, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.976667 = idf(docFreq=828, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5344)
        0.005429798 = weight(_text_:in in 5344) [ClassicSimilarity], result of:
          0.005429798 = score(doc=5344,freq=6.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.1822149 = fieldWeight in 5344, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5344)
        0.023911307 = weight(_text_:der in 5344) [ClassicSimilarity], result of:
          0.023911307 = score(doc=5344,freq=16.0), product of:
            0.048934754 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021906832 = queryNorm
            0.4886365 = fieldWeight in 5344, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5344)
        0.027827747 = product of:
          0.04174162 = sum of:
            0.020965107 = weight(_text_:29 in 5344) [ClassicSimilarity], result of:
              0.020965107 = score(doc=5344,freq=2.0), product of:
                0.077061385 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.021906832 = queryNorm
                0.27205724 = fieldWeight in 5344, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5344)
            0.020776514 = weight(_text_:22 in 5344) [ClassicSimilarity], result of:
              0.020776514 = score(doc=5344,freq=2.0), product of:
                0.076713994 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021906832 = queryNorm
                0.2708308 = fieldWeight in 5344, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5344)
          0.6666667 = coord(2/3)
      0.22222222 = coord(4/18)
    
    Abstract
    At the latest since the Deutscher Bibliothekartag 2018, the discussion of the automatic subject-indexing procedures of the German National Library has turned from a politically driven debate into a debate about quality. The following contribution deals with questions of the quality of subject indexing in digital times, in which heterogeneous products of different procedures meet, and attempts to define key requirements for quality. This conference paper summarizes the ideas the author presented as impulses at the workshop of the GBV expert group "Erschließung und Informationsvermittlung" on 29 August 2018 in Kiel. The workshop took place within the 22nd GBV Verbundkonferenz.
    Source
    ABI-Technik. 38(2018) H.4, S.327-331

Types

  • a 5164
  • el 698
  • m 463
  • s 121
  • x 94
  • r 34
  • n 14
  • b 8
  • i 5
  • p 3
  • ag 2
  • v 2
  • l 1
  • ms 1
  • z 1
