Search (455 results, page 1 of 23)

  • Active filter: type_ss:"el"
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.14
    0.13778776 = product of:
      0.41336328 = sum of:
        0.10334082 = product of:
          0.31002244 = sum of:
            0.31002244 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.31002244 = score(doc=1826,freq=2.0), product of:
                0.3309742 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03903913 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
        0.31002244 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.31002244 = score(doc=1826,freq=2.0), product of:
            0.3309742 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03903913 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.33333334 = coord(2/6)
    
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
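  The relevance breakdown shown under each hit is Lucene's ClassicSimilarity (TF-IDF) explain output: for every matching term, queryWeight = idf x queryNorm and fieldWeight = tf x idf x fieldNorm are multiplied, the per-term scores are summed, and the total is scaled by a coord factor for the fraction of query clauses that matched. A minimal Python sketch re-deriving the numbers of the first hit from the values printed in its explain tree:

    import math

    # Values copied from the explain tree of hit 1 (doc 1826, term "_text_:3a").
    idf = 8.478011          # idf(docFreq=24, maxDocs=44218) = 1 + ln(44218 / (24 + 1))
    query_norm = 0.03903913
    field_norm = 0.078125
    tf = math.sqrt(2.0)     # ClassicSimilarity uses tf = sqrt(term frequency)

    query_weight = idf * query_norm            # ~0.3309742
    field_weight = tf * idf * field_norm       # ~0.93669677
    term_score = query_weight * field_weight   # ~0.31002244

    # The "3a" clause sits inside a coord(1/3) sub-query (0.10334082), the "2f"
    # clause contributes the same term score directly, and their sum is scaled
    # by the outer coord(2/6) because two of six query clauses matched.
    total = (term_score * (1 / 3) + term_score) * (2 / 6)
    print(round(term_score, 8), round(total, 8))   # ~0.31002244  ~0.13778776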
  2. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.11
    0.11023021 = product of:
      0.33069062 = sum of:
        0.082672656 = product of:
          0.24801797 = sum of:
            0.24801797 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.24801797 = score(doc=230,freq=2.0), product of:
                0.3309742 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03903913 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
        0.24801797 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.24801797 = score(doc=230,freq=2.0), product of:
            0.3309742 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03903913 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
      0.33333334 = coord(2/6)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  3. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.07
    0.06889388 = product of:
      0.20668164 = sum of:
        0.05167041 = product of:
          0.15501122 = sum of:
            0.15501122 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.15501122 = score(doc=4388,freq=2.0), product of:
                0.3309742 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03903913 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
        0.15501122 = weight(_text_:2f in 4388) [ClassicSimilarity], result of:
          0.15501122 = score(doc=4388,freq=2.0), product of:
            0.3309742 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03903913 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
      0.33333334 = coord(2/6)
    
    Footnote
    See: https://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=2ahUKEwizweHljdbcAhVS16QKHXcFD9QQFjABegQICRAB&url=https%3A%2F%2Fwww.researchgate.net%2Fpublication%2F271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls&usg=AOvVaw06orrdJmFF2xbCCp_hL26q.
  4. Tozer, J.: How long is the perfect book? : Bigger really is better. What the numbers say (2019) 0.06
    0.060377758 = product of:
      0.18113327 = sum of:
        0.029834319 = product of:
          0.059668638 = sum of:
            0.059668638 = weight(_text_:theory in 4686) [ClassicSimilarity], result of:
              0.059668638 = score(doc=4686,freq=2.0), product of:
                0.16234003 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.03903913 = queryNorm
                0.36755344 = fieldWeight in 4686, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4686)
          0.5 = coord(1/2)
        0.15129896 = weight(_text_:graphic in 4686) [ClassicSimilarity], result of:
          0.15129896 = score(doc=4686,freq=2.0), product of:
            0.25850594 = queryWeight, product of:
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.03903913 = queryNorm
            0.5852823 = fieldWeight in 4686, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.0625 = fieldNorm(doc=4686)
      0.33333334 = coord(2/6)
    
    Abstract
    British novelist E.M. Forster once complained that long books "are usually overpraised" because "the reader wishes to convince others and himself that he has not wasted his time." To test his theory we collected reader ratings for 737 books tagged as "classic literature" on Goodreads.com, a review aggregator with 80m members. The bias towards chunky tomes was substantial. Slim volumes of 100 to 200 pages scored only 3.87 out of 5, whereas those over 1,000 pages scored 4.19. Longer is better, say the readers.
    Source
    https://www.1843magazine.com/data-graphic/what-the-numbers-say/how-long-is-the-perfect-book
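  The comparison described in the abstract is essentially a group-by over page-count buckets. A purely illustrative sketch with invented sample records (not the actual Goodreads data, and the bucket boundaries are assumptions):

    # Hypothetical (pages, average rating) records standing in for the 737 books.
    books = [(120, 3.9), (180, 3.8), (450, 4.0), (1050, 4.2), (1200, 4.15)]

    def bucket(pages: int) -> str:
        if pages <= 200:
            return "100-200 pages"
        if pages > 1000:
            return "over 1,000 pages"
        return "in between"

    groups: dict[str, list[float]] = {}
    for pages, rating in books:
        groups.setdefault(bucket(pages), []).append(rating)

    for label, ratings in groups.items():
        print(f"{label}: mean rating {sum(ratings) / len(ratings):.2f} ({len(ratings)} books)")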
  5. Facet analytical theory for managing knowledge structure in the humanities : FATKS (2003) 0.03
    0.034122285 = product of:
      0.2047337 = sum of:
        0.2047337 = sum of:
          0.119337276 = weight(_text_:theory in 2526) [ClassicSimilarity], result of:
            0.119337276 = score(doc=2526,freq=2.0), product of:
              0.16234003 = queryWeight, product of:
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.03903913 = queryNorm
              0.7351069 = fieldWeight in 2526, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.125 = fieldNorm(doc=2526)
          0.08539642 = weight(_text_:29 in 2526) [ClassicSimilarity], result of:
            0.08539642 = score(doc=2526,freq=2.0), product of:
              0.13732746 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.03903913 = queryNorm
              0.6218451 = fieldWeight in 2526, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.125 = fieldNorm(doc=2526)
      0.16666667 = coord(1/6)
    
    Date
    29. 8.2004 9:17:18
  6. Spero, S.: Dashed suspicious (2008) 0.03
    0.025216494 = product of:
      0.15129896 = sum of:
        0.15129896 = weight(_text_:graphic in 2626) [ClassicSimilarity], result of:
          0.15129896 = score(doc=2626,freq=2.0), product of:
            0.25850594 = queryWeight, product of:
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.03903913 = queryNorm
            0.5852823 = fieldWeight in 2626, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.0625 = fieldNorm(doc=2626)
      0.16666667 = coord(1/6)
    
    Content
    "This is the latest version of the Doorbell -> Mammal graph; it shows the direct and indirect broader terms of doorbells in LCSH. This incarnation of the graphic adds one new piece of visual information that seems to be very very suggestive. Dashed lines are used to indicate broader term references that have never been validated since BT and NT references were automatically generated from the old SA (See Also) links in 1988."
  7. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.02
    0.021671202 = product of:
      0.0650136 = sum of:
        0.05167041 = product of:
          0.15501122 = sum of:
            0.15501122 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.15501122 = score(doc=5669,freq=2.0), product of:
                0.3309742 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03903913 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
        0.01334319 = product of:
          0.02668638 = sum of:
            0.02668638 = weight(_text_:29 in 5669) [ClassicSimilarity], result of:
              0.02668638 = score(doc=5669,freq=2.0), product of:
                0.13732746 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03903913 = queryNorm
                0.19432661 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  8. Graphic details : a scientific study of the importance of diagrams to science (2016) 0.02
    0.021557001 = product of:
      0.064671 = sum of:
        0.05673711 = weight(_text_:graphic in 3035) [ClassicSimilarity], result of:
          0.05673711 = score(doc=3035,freq=2.0), product of:
            0.25850594 = queryWeight, product of:
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.03903913 = queryNorm
            0.21948087 = fieldWeight in 3035, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3035)
        0.007933895 = product of:
          0.01586779 = sum of:
            0.01586779 = weight(_text_:22 in 3035) [ClassicSimilarity], result of:
              0.01586779 = score(doc=3035,freq=2.0), product of:
                0.1367084 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03903913 = queryNorm
                0.116070345 = fieldWeight in 3035, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3035)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Content
    As the team describe in a paper posted (http://arxiv.org/abs/1605.04951) on arXiv, they found that figures did indeed matter, but not all in the same way. An average paper in PubMed Central has about one diagram for every three pages and gets 1.67 citations. Papers with more diagrams per page and, to a lesser extent, plots per page tended to be more influential (on average, a paper accrued two more citations for every extra diagram per page, and one more for every extra plot per page). By contrast, including photographs and equations seemed to decrease the chances of a paper being cited by others. That agrees with a study from 2012, whose authors counted (by hand) the number of mathematical expressions in over 600 biology papers and found that each additional equation per page reduced the number of citations a paper received by 22%. This does not mean that researchers should rush to include more diagrams in their next paper. Dr Howe has not shown what is behind the effect, which may merely be one of correlation, rather than causation. It could, for example, be that papers with lots of diagrams tend to be those that illustrate new concepts, and thus start a whole new field of inquiry. Such papers will certainly be cited a lot. On the other hand, the presence of equations really might reduce citations. Biologists (as are most of those who write and read the papers in PubMed Central) are notoriously maths-averse. If that is the case, looking in a physics archive would probably produce a different result.
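  Read literally, the quoted effect sizes allow a back-of-the-envelope estimate of citation counts. The function below only restates those rounded figures (plus the separate 22%-per-equation finding from the 2012 biology study); it is an illustration, not a fitted model:

    BASELINE = 1.67   # average citations of a PubMed Central paper, per the text

    def rough_citation_estimate(extra_diagrams_per_page: float,
                                extra_plots_per_page: float,
                                equations_per_page: float) -> float:
        # +2 citations per extra diagram/page, +1 per extra plot/page,
        # and a 22% reduction per equation/page.
        gain = 2.0 * extra_diagrams_per_page + 1.0 * extra_plots_per_page
        return (BASELINE + gain) * (1.0 - 0.22) ** equations_per_page

    print(rough_citation_estimate(1.0, 0.0, 0.0))   # one extra diagram per page
    print(rough_citation_estimate(0.0, 0.0, 2.0))   # two equations per page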
  9. Broughton, V.: Facet analysis as a fundamental theory for structuring subject organization tools (2007) 0.02
    0.021180402 = product of:
      0.12708241 = sum of:
        0.12708241 = sum of:
          0.084384196 = weight(_text_:theory in 537) [ClassicSimilarity], result of:
            0.084384196 = score(doc=537,freq=4.0), product of:
              0.16234003 = queryWeight, product of:
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.03903913 = queryNorm
              0.51979905 = fieldWeight in 537, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.0625 = fieldNorm(doc=537)
          0.04269821 = weight(_text_:29 in 537) [ClassicSimilarity], result of:
            0.04269821 = score(doc=537,freq=2.0), product of:
              0.13732746 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.03903913 = queryNorm
              0.31092256 = fieldWeight in 537, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.0625 = fieldNorm(doc=537)
      0.16666667 = coord(1/6)
    
    Abstract
    The presentation will examine the potential of facet analysis as a basis for determining status and relationships of concepts in subject based tools using a controlled vocabulary, and the extent to which it can be used as a general theory of knowledge organization as opposed to a methodology for structuring classifications only.
    Date
    26.12.2011 13:21:29
  10. Bensman, S.J.: Eugene Garfield, Francis Narin, and PageRank : the theoretical bases of the Google search engine (2013) 0.02
    0.021116383 = product of:
      0.06334915 = sum of:
        0.042192098 = product of:
          0.084384196 = sum of:
            0.084384196 = weight(_text_:theory in 1149) [ClassicSimilarity], result of:
              0.084384196 = score(doc=1149,freq=4.0), product of:
                0.16234003 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.03903913 = queryNorm
                0.51979905 = fieldWeight in 1149, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1149)
          0.5 = coord(1/2)
        0.021157054 = product of:
          0.04231411 = sum of:
            0.04231411 = weight(_text_:22 in 1149) [ClassicSimilarity], result of:
              0.04231411 = score(doc=1149,freq=2.0), product of:
                0.1367084 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03903913 = queryNorm
                0.30952093 = fieldWeight in 1149, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1149)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper presents a test of the validity of using Google Scholar to evaluate the publications of researchers by comparing the premises on which its search engine, PageRank, is based, to those of Garfield's theory of citation indexing. It finds that the premises are identical and that PageRank and Garfield's theory of citation indexing validate each other.
    Date
    17.12.2013 11:02:22
  11. Fowler, R.H.; Wilson, B.A.; Fowler, W.A.L.: Information navigator : an information system using associative networks for display and retrieval (1992) 0.02
    0.018912371 = product of:
      0.11347422 = sum of:
        0.11347422 = weight(_text_:graphic in 919) [ClassicSimilarity], result of:
          0.11347422 = score(doc=919,freq=2.0), product of:
            0.25850594 = queryWeight, product of:
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.03903913 = queryNorm
            0.43896174 = fieldWeight in 919, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.046875 = fieldNorm(doc=919)
      0.16666667 = coord(1/6)
    
    Abstract
    Document retrieval is a highly interactive process dealing with large amounts of information. Visual representations can provide both a means for managing the complexity of large information structures and an interface style well suited to interactive manipulation. The system we have designed utilizes visually displayed graphic structures and a direct manipulation interface style to supply an integrated environment for retrieval. A common visually displayed network structure is used for query, document content, and term relations. A query can be modified through direct manipulation of its visual form by incorporating terms from any other information structure the system displays. An associative thesaurus of terms and an inter-document network provide information about a document collection that can complement other retrieval aids. Visualization of these large data structures makes use of fisheye views and overview diagrams to help overcome some of the inherent difficulties of orientation and navigation in large information structures.
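  A minimal sketch of the shared network structure the abstract describes, with terms, documents, and a query as nodes of a single graph; it uses the networkx library and invented toy nodes, not the Information Navigator system itself:

    import networkx as nx   # third-party: pip install networkx

    G = nx.Graph()
    G.add_nodes_from(["retrieval", "thesaurus", "navigation"], kind="term")
    G.add_nodes_from(["doc1", "doc2"], kind="document")
    G.add_node("query", kind="query")

    G.add_edge("query", "retrieval")              # term used in the query
    G.add_edge("retrieval", "doc1", weight=0.8)   # term indexes a document
    G.add_edge("thesaurus", "doc1", weight=0.5)   # associative thesaurus link
    G.add_edge("doc1", "doc2", weight=0.3)        # inter-document association

    # Query reformulation by direct manipulation could pull in terms adjacent
    # to a relevant document:
    candidates = [n for n in G.neighbors("doc1") if G.nodes[n]["kind"] == "term"]
    print(candidates)   # ['retrieval', 'thesaurus']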
  12. Bartczak, J.; Glendon, I.: Python, Google Sheets, and the Thesaurus for Graphic Materials for efficient metadata project workflows (2017) 0.02
    0.018912371 = product of:
      0.11347422 = sum of:
        0.11347422 = weight(_text_:graphic in 3893) [ClassicSimilarity], result of:
          0.11347422 = score(doc=3893,freq=2.0), product of:
            0.25850594 = queryWeight, product of:
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.03903913 = queryNorm
            0.43896174 = fieldWeight in 3893, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.046875 = fieldNorm(doc=3893)
      0.16666667 = coord(1/6)
    
  13. Gödert, W.: Knowledge organization and information retrieval in times of change : concepts for education in Germany (2001) 0.02
    0.016835473 = product of:
      0.05050642 = sum of:
        0.026105028 = product of:
          0.052210055 = sum of:
            0.052210055 = weight(_text_:theory in 3413) [ClassicSimilarity], result of:
              0.052210055 = score(doc=3413,freq=2.0), product of:
                0.16234003 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.03903913 = queryNorm
                0.32160926 = fieldWeight in 3413, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3413)
          0.5 = coord(1/2)
        0.024401393 = product of:
          0.048802786 = sum of:
            0.048802786 = weight(_text_:methods in 3413) [ClassicSimilarity], result of:
              0.048802786 = score(doc=3413,freq=2.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.31093797 = fieldWeight in 3413, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3413)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    A survey is given of how changes in the field of information processing and technology have influenced the concepts for teaching and studying knowledge organization and information retrieval at German universities for library and information science. The discussion distinguishes between fields of change and fields of stability: the fields of change are characterised by procedures and applications in libraries, while the fields of stability are characterised by theory and methods.
  14. Franke, F.: ¬Das Framework for Information Literacy : neue Impulse für die Förderung von Informationskompetenz in Deutschland?! (2017) 0.01
    0.014533697 = product of:
      0.04360109 = sum of:
        0.0277333 = product of:
          0.0554666 = sum of:
            0.0554666 = weight(_text_:29 in 2248) [ClassicSimilarity], result of:
              0.0554666 = score(doc=2248,freq=6.0), product of:
                0.13732746 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03903913 = queryNorm
                0.40390027 = fieldWeight in 2248, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2248)
          0.5 = coord(1/2)
        0.01586779 = product of:
          0.03173558 = sum of:
            0.03173558 = weight(_text_:22 in 2248) [ClassicSimilarity], result of:
              0.03173558 = score(doc=2248,freq=2.0), product of:
                0.1367084 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03903913 = queryNorm
                0.23214069 = fieldWeight in 2248, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2248)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Content
    https://www.o-bib.de/article/view/2017H4S22-29. DOI: https://doi.org/10.5282/o-bib/2017H4S22-29.
    Source
    o-bib: Das offene Bibliotheksjournal. 4(2017) Nr.4, S.22-29
  15. Si, L.: Encoding formats and consideration of requirements for mapping (2007) 0.01
    0.014304606 = product of:
      0.085827634 = sum of:
        0.085827634 = sum of:
          0.048802786 = weight(_text_:methods in 540) [ClassicSimilarity], result of:
            0.048802786 = score(doc=540,freq=2.0), product of:
              0.15695344 = queryWeight, product of:
                4.0204134 = idf(docFreq=2156, maxDocs=44218)
                0.03903913 = queryNorm
              0.31093797 = fieldWeight in 540, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.0204134 = idf(docFreq=2156, maxDocs=44218)
                0.0546875 = fieldNorm(doc=540)
          0.037024844 = weight(_text_:22 in 540) [ClassicSimilarity], result of:
            0.037024844 = score(doc=540,freq=2.0), product of:
              0.1367084 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03903913 = queryNorm
              0.2708308 = fieldWeight in 540, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=540)
      0.16666667 = coord(1/6)
    
    Abstract
    With the increasing requirement of establishing semantic mappings between different vocabularies, further development of these encoding formats is becoming more and more important. For this reason, four types of knowledge representation formats were assessed: MARC21 for Classification Data in XML, Zthes XML Schema, XTM (XML Topic Map), and SKOS (Simple Knowledge Organisation System). This paper explores the potential of adapting these representation formats to support different semantic mapping methods, and discusses the implication of extending them to represent more complex KOS.
    Date
    26.12.2011 13:22:27
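  Of the formats assessed in the abstract above, SKOS expresses inter-vocabulary mappings with properties such as skos:exactMatch and skos:closeMatch. A minimal sketch using the rdflib library, with two invented concept URIs standing in for concepts from different vocabularies:

    from rdflib import Graph, URIRef
    from rdflib.namespace import RDF, SKOS   # rdflib bundles the SKOS namespace

    g = Graph()
    src = URIRef("http://example.org/vocabA/knowledge-organization")   # invented URI
    tgt = URIRef("http://example.org/vocabB/KnowledgeOrganisation")    # invented URI

    g.add((src, RDF.type, SKOS.Concept))
    g.add((tgt, RDF.type, SKOS.Concept))
    g.add((src, SKOS.exactMatch, tgt))   # assert the semantic mapping

    print(g.serialize(format="turtle"))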
  16. Cohen, D.J.: From Babel to knowledge : data mining large digital collections (2006) 0.01
    0.014268155 = product of:
      0.042804465 = sum of:
        0.014917159 = product of:
          0.029834319 = sum of:
            0.029834319 = weight(_text_:theory in 1178) [ClassicSimilarity], result of:
              0.029834319 = score(doc=1178,freq=2.0), product of:
                0.16234003 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.03903913 = queryNorm
                0.18377672 = fieldWeight in 1178, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1178)
          0.5 = coord(1/2)
        0.027887305 = product of:
          0.05577461 = sum of:
            0.05577461 = weight(_text_:methods in 1178) [ClassicSimilarity], result of:
              0.05577461 = score(doc=1178,freq=8.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.35535768 = fieldWeight in 1178, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1178)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    In Jorge Luis Borges's curious short story The Library of Babel, the narrator describes an endless collection of books stored from floor to ceiling in a labyrinth of countless hexagonal rooms. The pages of the library's books seem to contain random sequences of letters and spaces; occasionally a few intelligible words emerge in the sea of paper and ink. Nevertheless, readers diligently, and exasperatingly, scan the shelves for coherent passages. The narrator himself has wandered numerous rooms in search of enlightenment, but with resignation he simply awaits his death and burial - which Borges explains (with signature dark humor) consists of being tossed unceremoniously over the library's banister. Borges's nightmare, of course, is a cursed vision of the research methods of disciplines such as literature, history, and philosophy, where the careful reading of books, one after the other, is supposed to lead inexorably to knowledge and understanding. Computer scientists would approach Borges's library far differently. Employing the information theory that forms the basis for search engines and other computerized techniques for assessing in one fell swoop large masses of documents, they would quickly realize the collection's incoherence through sampling and statistical methods - and wisely start looking for the library's exit. These computational methods, which allow us to find patterns, determine relationships, categorize documents, and extract information from massive corpuses, will form the basis for new tools for research in the humanities and other disciplines in the coming decade. For the past three years I have been experimenting with how to provide such end-user tools - that is, tools that harness the power of vast electronic collections while hiding much of their complicated technical plumbing. In particular, I have made extensive use of the application programming interfaces (APIs) the leading search engines provide for programmers to query their databases directly (from server to server without using their web interfaces). In addition, I have explored how one might extract information from large digital collections, from the well-curated lexicographic database WordNet to the democratic (and poorly curated) online reference work Wikipedia. While processing these digital corpuses is currently an imperfect science, even now useful tools can be created by combining various collections and methods for searching and analyzing them. And more importantly, these nascent services suggest a future in which information can be gleaned from, and sense can be made out of, even imperfect digital libraries of enormous scale. A brief examination of two approaches to data mining large digital collections hints at this future, while also providing some lessons about how to get there.
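  As a small example of querying one of the curated resources mentioned above (WordNet) programmatically, in the spirit of the end-user tools the essay describes, a minimal sketch using NLTK's WordNet corpus reader (assumes NLTK is installed and the corpus can be downloaded):

    import nltk
    from nltk.corpus import wordnet as wn

    nltk.download("wordnet", quiet=True)   # one-time corpus download

    # Definitions and broader terms (hypernyms) for a word of interest.
    for synset in wn.synsets("library")[:3]:
        print(synset.name(), "-", synset.definition())
        print("  broader:", [h.name() for h in synset.hypernyms()])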
  17. Hartmann, S.; Haffner, A.: Linked-RDA-Data in der Praxis (2010) 0.01
    0.01416872 = product of:
      0.04250616 = sum of:
        0.021349104 = product of:
          0.04269821 = sum of:
            0.04269821 = weight(_text_:29 in 1679) [ClassicSimilarity], result of:
              0.04269821 = score(doc=1679,freq=2.0), product of:
                0.13732746 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03903913 = queryNorm
                0.31092256 = fieldWeight in 1679, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1679)
          0.5 = coord(1/2)
        0.021157054 = product of:
          0.04231411 = sum of:
            0.04231411 = weight(_text_:22 in 1679) [ClassicSimilarity], result of:
              0.04231411 = score(doc=1679,freq=2.0), product of:
                0.1367084 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03903913 = queryNorm
                0.30952093 = fieldWeight in 1679, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1679)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Content
    Lecture given at SWIB 2010, 29./30.11.2010, in Köln.
    Date
    13. 2.2011 20:22:23
  18. Landwehr, A.: China schafft digitales Punktesystem für den "besseren" Menschen (2018) 0.01
    0.01416872 = product of:
      0.04250616 = sum of:
        0.021349104 = product of:
          0.04269821 = sum of:
            0.04269821 = weight(_text_:29 in 4314) [ClassicSimilarity], result of:
              0.04269821 = score(doc=4314,freq=2.0), product of:
                0.13732746 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03903913 = queryNorm
                0.31092256 = fieldWeight in 4314, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4314)
          0.5 = coord(1/2)
        0.021157054 = product of:
          0.04231411 = sum of:
            0.04231411 = weight(_text_:22 in 4314) [ClassicSimilarity], result of:
              0.04231411 = score(doc=4314,freq=2.0), product of:
                0.1367084 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03903913 = queryNorm
                0.30952093 = fieldWeight in 4314, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4314)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Date
    22. 6.2018 14:29:46
  19. Hajdu Barat, A.: Multilevel education, training, traditions and research in Hungary (2007) 0.01
    0.012795856 = product of:
      0.07677513 = sum of:
        0.07677513 = sum of:
          0.044751476 = weight(_text_:theory in 545) [ClassicSimilarity], result of:
            0.044751476 = score(doc=545,freq=2.0), product of:
              0.16234003 = queryWeight, product of:
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.03903913 = queryNorm
              0.27566507 = fieldWeight in 545, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.046875 = fieldNorm(doc=545)
          0.032023653 = weight(_text_:29 in 545) [ClassicSimilarity], result of:
            0.032023653 = score(doc=545,freq=2.0), product of:
              0.13732746 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.03903913 = queryNorm
              0.23319192 = fieldWeight in 545, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.046875 = fieldNorm(doc=545)
      0.16666667 = coord(1/6)
    
    Abstract
    This paper aims to explore the theory and practice of education in schools and in further education as two levels of the information society in Hungary. LIS education is considered a third level above these. I attempt to survey the curriculum and content of the different school subjects, and the structure of the programme for librarians. There is a long and rich history of UDC usage in Hungary; the lecture sketches the stages of this tradition from its beginnings to the present situation. Szabó Ervin began teaching the UDC at the Municipal Library in Budapest in 1910. He not only used the UDC but also taught it to librarians in his courses. As a consequence of Szabó Ervin's activity, librarians knew and used the UDC very early, and all libraries came to use it. The article gives a short overview of recent developments and duties, the situation after the new Hungarian edition, UDC usage in Hungarian OPACs, and the possibility of UDC visualization.
    Source
    Extensions and corrections to the UDC. 29(2007), S.273-284
  20. Naudet, Y.; Latour, T.; Chen, D.: ¬A Systemic approach to Interoperability formalization (2009) 0.01
    0.012795856 = product of:
      0.07677513 = sum of:
        0.07677513 = sum of:
          0.044751476 = weight(_text_:theory in 2740) [ClassicSimilarity], result of:
            0.044751476 = score(doc=2740,freq=2.0), product of:
              0.16234003 = queryWeight, product of:
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.03903913 = queryNorm
              0.27566507 = fieldWeight in 2740, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.046875 = fieldNorm(doc=2740)
          0.032023653 = weight(_text_:29 in 2740) [ClassicSimilarity], result of:
            0.032023653 = score(doc=2740,freq=2.0), product of:
              0.13732746 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.03903913 = queryNorm
              0.23319192 = fieldWeight in 2740, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.046875 = fieldNorm(doc=2740)
      0.16666667 = coord(1/6)
    
    Abstract
    With a first version developed last year, the Ontology of Interoperability (OoI) aims at formally describing concepts relating to problems and solutions in the domain of interoperability. From the beginning, the OoI has had its foundations in systemic theory and addresses interoperability from the general point of view of a system, whether or not it is composed of other systems (a system-of-systems). In this paper, we present the latest OoI, focusing on the systemic approach. We then integrate a classification of interoperability knowledge provided by the Framework for Enterprise Interoperability. In this way, we contextualize the OoI with a vocabulary specific to the enterprise domain, where solutions to interoperability problems are characterized according to the interoperability approaches defined in ISO 14258, and both solutions and problems can be located at enterprise levels and characterized by interoperability levels, as defined in the European Interoperability Framework.
    Date
    29. 1.2016 18:48:14

Languages

  • e 283
  • d 159
  • el 3
  • a 2
  • i 2
  • nl 1
  • sp 1

Types

  • a 236
  • i 20
  • s 13
  • r 7
  • m 6
  • x 6
  • p 4
  • b 3
  • n 1