Search (360 results, page 1 of 18)

  • type_ss:"el"
  1. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.07
    0.07020888 = sum of:
      0.051830415 = product of:
        0.20732166 = sum of:
          0.20732166 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
            0.20732166 = score(doc=5669,freq=2.0), product of:
              0.4426655 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.052213363 = queryNorm
              0.46834838 = fieldWeight in 5669, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5669)
        0.25 = coord(1/4)
      0.01837846 = product of:
        0.03675692 = sum of:
          0.03675692 = weight(_text_:k in 5669) [ClassicSimilarity], result of:
            0.03675692 = score(doc=5669,freq=2.0), product of:
              0.18639012 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.052213363 = queryNorm
              0.19720423 = fieldWeight in 5669, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5669)
        0.5 = coord(1/2)
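The explain trees in this listing follow Lucene's ClassicSimilarity (TF-IDF) formula: tf = √freq, idf = 1 + ln(maxDocs / (docFreq + 1)), and each leaf score is queryWeight · fieldWeight. A minimal sketch that reproduces the `_text_:3a` leaf above from its reported factors (matching up to float32 rounding):

```python
import math

def classic_leaf_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """Reproduce one weight(...) leaf of a Lucene ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                             # 1.4142135 for freq=2.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 8.478011 for docFreq=24
    query_weight = idf * query_norm                  # 0.4426655
    field_weight = tf * idf * field_norm             # 0.46834838
    return query_weight * field_weight

score = classic_leaf_score(freq=2.0, doc_freq=24, max_docs=44218,
                           query_norm=0.052213363, field_norm=0.0390625)
print(f"{score:.8f}")  # ≈ 0.20732166, the leaf score above
```

The surrounding `coord(1/4)` and `coord(1/2)` factors then scale each sum by the fraction of query clauses the document matched.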
    
    Content
    "The English-language Wikipedia now has more than 6 million articles. The German-language Wikipedia comes second with 2.3 million articles, and the French-language Wikipedia third with 2.1 million articles (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> and Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog see also: In view of the publication of the six-millionth article in the English-language Wikipedia last week, the community newspaper "Wikipedia Signpost" has called for a moratorium on the publication of articles about companies. This is not meant as an accusation against the Wikimedia Foundation, but the current measures to protect the encyclopedia against abusive undeclared paid editing are clearly not working. *"Since the volunteer editors are currently being overwhelmed by advertising in the form of Wikipedia articles, and since the WMF does not appear to be able to put up any resistance, the only viable course for the editors would be to prohibit the creation of new articles about companies for the time being"*, writes user Smallbones in his editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> for today's issue."
  2. Knorz, G.; Rein, B.: Semantische Suche in einer Hochschulontologie : Ontologie-basiertes Information-Filtering und -Retrieval mit relationalen Datenbanken (2005) 0.06
    0.06114716 = product of:
      0.12229432 = sum of:
        0.12229432 = sum of:
          0.07277499 = weight(_text_:k in 4324) [ClassicSimilarity], result of:
            0.07277499 = score(doc=4324,freq=4.0), product of:
              0.18639012 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.052213363 = queryNorm
              0.39044446 = fieldWeight in 4324, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4324)
          0.049519327 = weight(_text_:22 in 4324) [ClassicSimilarity], result of:
            0.049519327 = score(doc=4324,freq=2.0), product of:
              0.1828423 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052213363 = queryNorm
              0.2708308 = fieldWeight in 4324, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4324)
      0.5 = coord(1/2)
    
    Abstract
    Ontologies are employed to provide, through semantic grounding, a fundamentally better basis for document retrieval in particular than the current state of the art offers. We present an ontology developed and deployed at the FH Darmstadt that is intended both to cover the subject area of higher education broadly and, at the same time, to describe it semantically in a differentiated way. The problem of semantic search is that it must be as easy for information seekers to use as common search engines while also delivering high-quality results on the basis of the elaborate information model. We describe the capabilities provided by the software K-Infinity and the concept by which these capabilities are used for a semantic search for documents and other information units (persons, events, projects, etc.).
    Date
    11. 2.2011 18:22:25
    Object
    K-Infinity
  3. Eckert, K.: SKOS: eine Sprache für die Übertragung von Thesauri ins Semantic Web (2011) 0.06
    0.057702295 = product of:
      0.11540459 = sum of:
        0.11540459 = sum of:
          0.058811072 = weight(_text_:k in 4331) [ClassicSimilarity], result of:
            0.058811072 = score(doc=4331,freq=2.0), product of:
              0.18639012 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.052213363 = queryNorm
              0.31552678 = fieldWeight in 4331, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.0625 = fieldNorm(doc=4331)
          0.05659352 = weight(_text_:22 in 4331) [ClassicSimilarity], result of:
            0.05659352 = score(doc=4331,freq=2.0), product of:
              0.1828423 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052213363 = queryNorm
              0.30952093 = fieldWeight in 4331, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=4331)
      0.5 = coord(1/2)
    
    Date
    15. 3.2011 19:21:22
  4. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.06
    0.055446193 = sum of:
      0.025432948 = product of:
        0.10173179 = sum of:
          0.10173179 = weight(_text_:authors in 1967) [ClassicSimilarity], result of:
            0.10173179 = score(doc=1967,freq=4.0), product of:
              0.23803101 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052213363 = queryNorm
              0.42738882 = fieldWeight in 1967, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=1967)
        0.25 = coord(1/4)
      0.030013246 = product of:
        0.060026493 = sum of:
          0.060026493 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
            0.060026493 = score(doc=1967,freq=4.0), product of:
              0.1828423 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052213363 = queryNorm
              0.32829654 = fieldWeight in 1967, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1967)
        0.5 = coord(1/2)
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and /or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
  5. Boldi, P.; Santini, M.; Vigna, S.: PageRank as a function of the damping factor (2005) 0.05
    0.0544424 = product of:
      0.1088848 = sum of:
        0.1088848 = sum of:
          0.07351384 = weight(_text_:k in 2564) [ClassicSimilarity], result of:
            0.07351384 = score(doc=2564,freq=8.0), product of:
              0.18639012 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.052213363 = queryNorm
              0.39440846 = fieldWeight in 2564, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2564)
          0.03537095 = weight(_text_:22 in 2564) [ClassicSimilarity], result of:
            0.03537095 = score(doc=2564,freq=2.0), product of:
              0.1828423 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052213363 = queryNorm
              0.19345059 = fieldWeight in 2564, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2564)
      0.5 = coord(1/2)
    
    Abstract
    PageRank is defined as the stationary state of a Markov chain. The chain is obtained by perturbing the transition matrix induced by a web graph with a damping factor alpha that spreads uniformly part of the rank. The choice of alpha is eminently empirical, and in most cases the original suggestion alpha=0.85 by Brin and Page is still used. Recently, however, the behaviour of PageRank with respect to changes in alpha was discovered to be useful in link-spam detection. Moreover, an analytical justification of the value chosen for alpha is still missing. In this paper, we give the first mathematical analysis of PageRank when alpha changes. In particular, we show that, contrarily to popular belief, for real-world graphs values of alpha close to 1 do not give a more meaningful ranking. Then, we give closed-form formulae for PageRank derivatives of any order, and an extension of the Power Method that approximates them with convergence O(t**k*alpha**t) for the k-th derivative. Finally, we show a tight connection between iterated computation and analytical behaviour by proving that the k-th iteration of the Power Method gives exactly the PageRank value obtained using a Maclaurin polynomial of degree k. The latter result paves the way towards the application of analytical methods to the study of PageRank.
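The setup the abstract describes (the stationary state of a damped Markov chain, computed by the Power Method) can be sketched as follows; the graph and tolerance values are illustrative, not taken from the paper:

```python
def pagerank(links, alpha=0.85, tol=1e-12, max_iter=1000):
    """Power Method for PageRank: x <- alpha * x P + (1 - alpha)/n * 1."""
    n = len(links)
    x = [1.0 / n] * n
    for _ in range(max_iter):
        nxt = [(1.0 - alpha) / n] * n
        for i, outs in enumerate(links):
            if outs:                          # spread rank along out-links
                share = alpha * x[i] / len(outs)
                for j in outs:
                    nxt[j] += share
            else:                             # dangling node: spread uniformly
                for j in range(n):
                    nxt[j] += alpha * x[i] / n
        if sum(abs(a - b) for a, b in zip(nxt, x)) < tol:
            return nxt
        x = nxt
    return x

# tiny 3-node web graph: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0
ranks = pagerank([[1, 2], [2], [0]])
print([round(r, 4) for r in ranks])
```

Varying `alpha` here is exactly the experiment the paper analyzes: as alpha approaches 1, the iteration converges more slowly and, per the authors, the ranking does not become more meaningful.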
    Date
    16. 1.2016 10:22:28
  6. Lindholm, J.; Schönthal, T.; Jansson, K.: Experiences of harvesting Web resources in engineering using automatic classification (2003) 0.05
    0.053383946 = sum of:
      0.023978412 = product of:
        0.09591365 = sum of:
          0.09591365 = weight(_text_:authors in 4088) [ClassicSimilarity], result of:
            0.09591365 = score(doc=4088,freq=2.0), product of:
              0.23803101 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052213363 = queryNorm
              0.40294603 = fieldWeight in 4088, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0625 = fieldNorm(doc=4088)
        0.25 = coord(1/4)
      0.029405536 = product of:
        0.058811072 = sum of:
          0.058811072 = weight(_text_:k in 4088) [ClassicSimilarity], result of:
            0.058811072 = score(doc=4088,freq=2.0), product of:
              0.18639012 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.052213363 = queryNorm
              0.31552678 = fieldWeight in 4088, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.0625 = fieldNorm(doc=4088)
        0.5 = coord(1/2)
    
    Abstract
    The authors describe the background and the work involved in setting up Engine-e, a Web index that uses automatic classification as a means of selecting resources in engineering. Considerations in offering a robot-generated Web index as the successor to a manually indexed, quality-controlled subject gateway are also discussed.
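Robot-generated subject gateways of this kind typically classify harvested pages by matching their text against weighted terms from a controlled vocabulary. A minimal sketch of that idea; the class captions, terms, and weights below are hypothetical, not Engine-e's actual scheme:

```python
def classify(text, vocabulary):
    """Assign classes whose controlled terms occur in the harvested page text.
    vocabulary maps class captions to lists of (term, weight) pairs."""
    words = text.lower()
    scores = {}
    for cls, terms in vocabulary.items():
        s = sum(w for term, w in terms if term in words)
        if s > 0:
            scores[cls] = s
    return sorted(scores, key=scores.get, reverse=True)

# hypothetical engineering classes and matching terms
vocab = {
    "Civil engineering": [("bridge", 2), ("concrete", 1)],
    "Electrical engineering": [("circuit", 2), ("voltage", 1)],
}
result = classify("Design of a concrete bridge deck", vocab)
print(result)
```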
  7. Zanibbi, R.; Yuan, B.: Keyword and image-based retrieval for mathematical expressions (2011) 0.05
    0.052411847 = product of:
      0.10482369 = sum of:
        0.10482369 = sum of:
          0.06237856 = weight(_text_:k in 3449) [ClassicSimilarity], result of:
            0.06237856 = score(doc=3449,freq=4.0), product of:
              0.18639012 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.052213363 = queryNorm
              0.33466667 = fieldWeight in 3449, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.046875 = fieldNorm(doc=3449)
          0.042445138 = weight(_text_:22 in 3449) [ClassicSimilarity], result of:
            0.042445138 = score(doc=3449,freq=2.0), product of:
              0.1828423 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052213363 = queryNorm
              0.23214069 = fieldWeight in 3449, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3449)
      0.5 = coord(1/2)
    
    Abstract
    Two new methods for retrieving mathematical expressions using conventional keyword search and expression images are presented. An expression-level TF-IDF (term frequency-inverse document frequency) approach is used for keyword search, where queries and indexed expressions are represented by keywords taken from LaTeX strings. TF-IDF is computed at the level of individual expressions rather than documents to increase the precision of matching. The second retrieval technique is a form of Content-Based Image Retrieval (CBIR). Expressions are segmented into connected components, and then components in the query expression and each expression in the collection are matched using contour and density features, aspect ratios, and relative positions. In an experiment using ten randomly sampled queries from a corpus of over 22,000 expressions, precision-at-k (k = 20) for the keyword-based approach was higher (keyword: µ = 84.0, s = 19.0; image-based: µ = 32.0, s = 30.7), but for a few of the queries better results were obtained using a combination of the two techniques.
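The expression-level TF-IDF idea can be sketched as below: each indexed LaTeX expression gets its own TF-IDF vector, and queries are ranked by cosine similarity. The tokenization and sample expressions are hypothetical stand-ins for the paper's actual LaTeX keyword extraction:

```python
import math
from collections import Counter

def build_index(expressions):
    """TF-IDF vectors computed per expression (not per document)."""
    n = len(expressions)
    df = Counter(tok for toks in expressions for tok in set(toks))
    idf = {t: math.log(n / df[t]) for t in df}
    vecs = [{t: c * idf[t] for t, c in Counter(toks).items()}
            for toks in expressions]
    return vecs, idf

def rank(query_toks, vecs, idf, k=20):
    """Return indices of the top-k expressions by cosine similarity."""
    q = {t: c * idf.get(t, 0.0) for t, c in Counter(query_toks).items()}
    qn = math.sqrt(sum(w * w for w in q.values())) or 1.0
    def cos(v):
        vn = math.sqrt(sum(w * w for w in v.values())) or 1.0
        return sum(q.get(t, 0.0) * w for t, w in v.items()) / (qn * vn)
    return sorted(range(len(vecs)), key=lambda i: cos(vecs[i]), reverse=True)[:k]

# keyword lists taken from (hypothetical) LaTeX strings
exprs = [["\\frac", "a", "b"], ["\\sqrt", "x"], ["\\frac", "x", "y"]]
vecs, idf = build_index(exprs)
top = rank(["\\frac", "x"], vecs, idf, k=2)
print(top)
```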
    Date
    22. 2.2017 12:53:49
  8. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.05
    0.051830415 = product of:
      0.10366083 = sum of:
        0.10366083 = product of:
          0.41464332 = sum of:
            0.41464332 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.41464332 = score(doc=1826,freq=2.0), product of:
                0.4426655 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.052213363 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  9. Golub, K.; Moon, J.; Nielsen, M.L.; Tudhope, D.: EnTag: Enhanced Tagging for Discovery (2008) 0.05
    0.046710957 = sum of:
      0.02098111 = product of:
        0.08392444 = sum of:
          0.08392444 = weight(_text_:authors in 2294) [ClassicSimilarity], result of:
            0.08392444 = score(doc=2294,freq=2.0), product of:
              0.23803101 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052213363 = queryNorm
              0.35257778 = fieldWeight in 2294, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2294)
        0.25 = coord(1/4)
      0.025729846 = product of:
        0.051459692 = sum of:
          0.051459692 = weight(_text_:k in 2294) [ClassicSimilarity], result of:
            0.051459692 = score(doc=2294,freq=2.0), product of:
              0.18639012 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.052213363 = queryNorm
              0.27608594 = fieldWeight in 2294, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2294)
        0.5 = coord(1/2)
    
    Abstract
    Purpose: Investigate the combination of controlled and folksonomy approaches to support resource discovery in repositories and digital collections. Aim: Investigate whether use of an established controlled vocabulary can help improve social tagging for better resource discovery. Objectives: (1) Investigate indexing aspects when using only social tagging versus when using social tagging with suggestions from a controlled vocabulary; (2) Investigate above in two different contexts: tagging by readers and tagging by authors; (3) Investigate influence of only social tagging versus social tagging with a controlled vocabulary on retrieval. - Vgl.: http://www.ukoln.ac.uk/projects/enhanced-tagging/.
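The "social tagging with suggestions from a controlled vocabulary" scenario amounts to completing a partially typed tag against established terms. A minimal sketch under the assumption of simple prefix matching on preferred labels; the label list is hypothetical, not EnTag's vocabulary:

```python
def suggest(prefix, pref_labels, limit=5):
    """Suggest controlled-vocabulary terms for a partially typed tag."""
    p = prefix.lower()
    hits = [t for t in pref_labels if t.lower().startswith(p)]
    return sorted(hits)[:limit]

# hypothetical preferred labels from a controlled vocabulary
labels = ["Indexing", "Information retrieval", "Information literacy", "Ontology"]
hits = suggest("info", labels)
print(hits)
```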
  10. Bärnreuther, K.: Informationskompetenz-Vermittlung für Schulklassen mit Wikipedia und dem Framework Informationskompetenz in der Hochschulbildung (2021) 0.04
    0.04327672 = product of:
      0.08655344 = sum of:
        0.08655344 = sum of:
          0.044108305 = weight(_text_:k in 299) [ClassicSimilarity], result of:
            0.044108305 = score(doc=299,freq=2.0), product of:
              0.18639012 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.052213363 = queryNorm
              0.23664509 = fieldWeight in 299, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.046875 = fieldNorm(doc=299)
          0.042445138 = weight(_text_:22 in 299) [ClassicSimilarity], result of:
            0.042445138 = score(doc=299,freq=2.0), product of:
              0.1828423 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052213363 = queryNorm
              0.23214069 = fieldWeight in 299, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=299)
      0.5 = coord(1/2)
    
    Source
    o-bib: Das offene Bibliotheksjournal. 8(2021) Nr.2, S.1-22
  11. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.04
    0.041464332 = product of:
      0.082928665 = sum of:
        0.082928665 = product of:
          0.33171466 = sum of:
            0.33171466 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.33171466 = score(doc=230,freq=2.0), product of:
                0.4426655 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.052213363 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  12. Severiens, T.; Hohlfeld, M.; Zimmermann, K.; Hilf, E.R.: PhysDoc - a distributed network of physics institutions documents : collecting, indexing, and searching high quality documents by using harvest (2000) 0.04
    0.039572585 = sum of:
      0.021194125 = product of:
        0.0847765 = sum of:
          0.0847765 = weight(_text_:authors in 6470) [ClassicSimilarity], result of:
            0.0847765 = score(doc=6470,freq=4.0), product of:
              0.23803101 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052213363 = queryNorm
              0.35615736 = fieldWeight in 6470, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=6470)
        0.25 = coord(1/4)
      0.01837846 = product of:
        0.03675692 = sum of:
          0.03675692 = weight(_text_:k in 6470) [ClassicSimilarity], result of:
            0.03675692 = score(doc=6470,freq=2.0), product of:
              0.18639012 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.052213363 = queryNorm
              0.19720423 = fieldWeight in 6470, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.0390625 = fieldNorm(doc=6470)
        0.5 = coord(1/2)
    
    Abstract
    PhysNet offers online services that enable a physicist to keep in touch with the worldwide physics community and to receive all information he or she may need. In addition to being of great value to physicists, these services are practical examples of the use of modern methods of digital libraries, in particular the use of metadata harvesting. One service is PhysDoc. This consists of a Harvest-based online information broker- and gatherer-network, which harvests information from the local web-servers of professional physics institutions worldwide (mostly in Europe and USA so far). PhysDoc focuses on scientific information posted by the individual scientist at his local server, such as documents, publications, reports, publication lists, and lists of links to documents. All rights are reserved for the authors who are responsible for the content and quality of their documents. PhysDis is an analogous service but specifically for university theses, with their dual requirements of examination work and publication. The strategy is to select high quality sites containing metadata. We report here on the present status of PhysNet, our experience in operating it, and the development of its usage. To continuously involve authors, research groups, and national societies is considered crucial for a future stable service.
  13. Coyle, K.; Hillmann, D.: Resource Description and Access (RDA) : cataloging rules for the 20th century (2007) 0.03
    0.03336497 = sum of:
      0.0149865085 = product of:
        0.059946034 = sum of:
          0.059946034 = weight(_text_:authors in 2525) [ClassicSimilarity], result of:
            0.059946034 = score(doc=2525,freq=2.0), product of:
              0.23803101 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052213363 = queryNorm
              0.25184128 = fieldWeight in 2525, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2525)
        0.25 = coord(1/4)
      0.01837846 = product of:
        0.03675692 = sum of:
          0.03675692 = weight(_text_:k in 2525) [ClassicSimilarity], result of:
            0.03675692 = score(doc=2525,freq=2.0), product of:
              0.18639012 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.052213363 = queryNorm
              0.19720423 = fieldWeight in 2525, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2525)
        0.5 = coord(1/2)
    
    Abstract
    There is evidence that many individuals and organizations in the library world do not support the work taking place to develop a next generation of the library cataloging rules. The authors describe the tensions existing between those advocating an incremental change to cataloging process and others who desire a bolder library entry into the digital era. Libraries have lost their place as primary information providers, surpassed by more agile (and in many cases wealthier) purveyors of digital information delivery services. Although libraries still manage materials that are not available elsewhere, the library's approach to user service and the user interface is not competing successfully against services like Amazon or Google. If libraries are to avoid further marginalization, they need to make a fundamental change in their approach to user services. The library's signature service, its catalog, uses rules for cataloging that are remnants of a long departed technology: the card catalog. Modifications to the rules, such as those proposed by the Resource Description and Access (RDA) development effort, can only keep us rooted firmly in the 20th, if not the 19th century. A more radical change is required that will contribute to the library of the future, re-imagined and integrated with the chosen workflow of its users.
  14. Mayo, D.; Bowers, K.: ¬The devil's shoehorn : a case study of EAD to ArchivesSpace migration at a large university (2017) 0.03
    0.03336497 = sum of:
      0.0149865085 = product of:
        0.059946034 = sum of:
          0.059946034 = weight(_text_:authors in 3373) [ClassicSimilarity], result of:
            0.059946034 = score(doc=3373,freq=2.0), product of:
              0.23803101 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052213363 = queryNorm
              0.25184128 = fieldWeight in 3373, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3373)
        0.25 = coord(1/4)
      0.01837846 = product of:
        0.03675692 = sum of:
          0.03675692 = weight(_text_:k in 3373) [ClassicSimilarity], result of:
            0.03675692 = score(doc=3373,freq=2.0), product of:
              0.18639012 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.052213363 = queryNorm
              0.19720423 = fieldWeight in 3373, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3373)
        0.5 = coord(1/2)
    
    Abstract
    A band of archivists and IT professionals at Harvard took on a project to convert nearly two million descriptions of archival collection components from marked-up text into the ArchivesSpace archival metadata management system. Starting in the mid-1990s, Harvard was an alpha implementer of EAD, an SGML (later XML) text markup language for electronic inventories, indexes, and finding aids that archivists use to wend their way through the sometimes quirky filing systems that bureaucracies establish for their records or the utter chaos in which some individuals keep their personal archives. These pathfinder documents, designed to cope with messy reality, can themselves be difficult to classify. Portions of them are rigorously structured, while other parts are narrative. Early documents predate the establishment of the standard; many feature idiosyncratic encoding that had been through several machine conversions, while others were freshly encoded and fairly consistent. In this paper, we will cover the practical and technical challenges involved in preparing a large (900MiB) corpus of XML for ingest into an open-source archival information system (ArchivesSpace). This case study will give an overview of the project, discuss problem discovery and problem solving, and address the technical challenges, analysis, solutions, and decisions and provide information on the tools produced and lessons learned. The authors of this piece are Kate Bowers, Collections Services Archivist for Metadata, Systems, and Standards at the Harvard University Archive, and Dave Mayo, a Digital Library Software Engineer for Harvard's Library and Technology Services. Kate was heavily involved in both metadata analysis and later problem solving, while Dave was the sole full-time developer assigned to the migration project.
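The core technical step described, walking nested EAD `<c>` components and flattening them into records an archival information system can ingest, can be sketched as follows. The `<c>`, `<did>`, and `<unittitle>` elements are standard EAD; the sample document and record shape are illustrative, not Harvard's data:

```python
import xml.etree.ElementTree as ET

EAD = """<ead><archdesc><dsc>
  <c level="series"><did><unittitle>Correspondence</unittitle></did>
    <c level="file"><did><unittitle>Letters, 1901-1910</unittitle></did></c>
  </c>
</dsc></archdesc></ead>"""

def flatten(elem, depth=0, out=None):
    """Walk nested <c> components, emitting (depth, level, title) records."""
    if out is None:
        out = []
    for c in elem.findall("c"):
        title = c.findtext("did/unittitle", default="(untitled)")
        out.append((depth, c.get("level"), title))
        flatten(c, depth + 1, out)
    return out

dsc = ET.fromstring(EAD).find("archdesc/dsc")
records = flatten(dsc)
for rec in records:
    print(rec)
```

Real finding aids mix rigorously structured and narrative portions, which is exactly why the project's problem discovery mattered: idiosyncratic or legacy encoding breaks a naive walker like this one.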
  15. Open MIND (2015) 0.03
    0.032671984 = sum of:
      0.0149865085 = product of:
        0.059946034 = sum of:
          0.059946034 = weight(_text_:authors in 1648) [ClassicSimilarity], result of:
            0.059946034 = score(doc=1648,freq=2.0), product of:
              0.23803101 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052213363 = queryNorm
              0.25184128 = fieldWeight in 1648, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1648)
        0.25 = coord(1/4)
      0.017685475 = product of:
        0.03537095 = sum of:
          0.03537095 = weight(_text_:22 in 1648) [ClassicSimilarity], result of:
            0.03537095 = score(doc=1648,freq=2.0), product of:
              0.1828423 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052213363 = queryNorm
              0.19345059 = fieldWeight in 1648, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1648)
        0.5 = coord(1/2)
    
    Abstract
    This is an edited collection of 39 original papers and as many commentaries and replies. The target papers and replies were written by senior members of the MIND Group, while all commentaries were written by junior group members. All papers and commentaries have undergone a rigorous process of anonymous peer review, during which the junior members of the MIND Group acted as reviewers. The final versions of all the target articles, commentaries and replies have undergone additional editorial review. Besides offering a cross-section of ongoing, cutting-edge research in philosophy and cognitive science, this collection is also intended to be a free electronic resource for teaching. It therefore also contains a selection of online supporting materials, pointers to video and audio files and to additional free material supplied by the 92 authors represented in this volume. We will add more multimedia material, a searchable literature database, and tools to work with the online version in the future. All contributions to this collection are strictly open access. They can be downloaded, printed, and reproduced by anyone.
    Date
    27. 1.2015 11:48:22
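    The per-term breakdown above is Lucene ClassicSimilarity "explain" output. As a minimal sketch, assuming only the factors the tree itself prints (tf = sqrt(termFreq), queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, each clause scaled by a coord factor), the listed figures can be reproduced:

    ```python
    import math

    def classic_similarity(freq, idf, query_norm, field_norm, coord=1.0):
        """One clause of a Lucene ClassicSimilarity 'explain' tree:
        score = coord * queryWeight * fieldWeight, where
        queryWeight = idf * queryNorm and fieldWeight = sqrt(freq) * idf * fieldNorm."""
        tf = math.sqrt(freq)                 # tf(freq=2.0) -> 1.4142135 in the tree
        query_weight = idf * query_norm      # e.g. 4.558814 * 0.052213363 -> 0.23803101
        field_weight = tf * idf * field_norm # e.g. 1.4142135 * 4.558814 * 0.0390625 -> 0.25184128
        return coord * query_weight * field_weight

    # The two clauses of entry 15 (Open MIND), using the factors printed above:
    authors_clause = classic_similarity(2.0, 4.558814, 0.052213363, 0.0390625, coord=0.25)
    term_22_clause = classic_similarity(2.0, 3.5018296, 0.052213363, 0.0390625, coord=0.5)
    total = authors_clause + term_22_clause  # ≈ 0.032671984, the score shown by the title
    ```

    The same arithmetic accounts for the other entries' headline scores as well; only the idf, fieldNorm, and coord factors differ per clause.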
  16. Sparck Jones, K.: Summary performance comparisons TREC-2 through TREC-8 (2001) 0.03
  17. Hegna, K.: Using FRBR (2004) 0.03
  18. Information als Rohstoff für Innovation : Programm der Bundesregierung 1996-2000 (1996) 0.03
    Date
    22. 2.1997 19:26:34
  19. Ask me[@sk.me]: your global information guide : der Wegweiser durch die Informationswelten (1996) 0.03
    Date
    30.11.1996 13:22:37
  20. Kosmos Weltatlas 2000 : Der Kompass für das 21. Jahrhundert. Inklusive Welt-Routenplaner (1999) 0.03
    Date
    7.11.1999 18:22:39

Languages

  • e 183
  • d 163
  • a 3
  • el 2
  • f 1
  • nl 1

Types

  • a 184
  • i 13
  • m 8
  • r 6
  • s 6
  • x 4
  • b 3
  • n 2

Themes