Search (363 results, page 1 of 19)

  • Filter: type_ss:"el"
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.07
    0.0689421 = product of:
      0.1378842 = sum of:
        0.1378842 = product of:
          0.41365257 = sum of:
            0.41365257 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.41365257 = score(doc=1826,freq=2.0), product of:
                0.44160777 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.052088603 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
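The indented blocks under each hit are Lucene "explain" trees for the ClassicSimilarity (TF-IDF) scorer: each leaf weight is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and the coord factors then scale the partial-match sum. A minimal Python sketch, using only the numbers printed in the tree above, reproduces the first entry's score:

```python
import math

# Figures copied from the explain tree for the "3a" term in doc 1826.
idf = 8.478011            # idf(docFreq=24, maxDocs=44218)
query_norm = 0.052088603  # queryNorm (depends on the whole query)
tf = math.sqrt(2.0)       # tf(freq=2.0) = sqrt(freq) in ClassicSimilarity
field_norm = 0.078125     # fieldNorm(doc=1826)

query_weight = idf * query_norm        # 0.44160777 = queryWeight
field_weight = tf * idf * field_norm   # 0.93669677 = fieldWeight
leaf = query_weight * field_weight     # 0.41365257 = weight(_text_:3a)

# coord(1/3) and coord(1/2) down-weight documents that match only
# part of the query's clauses.
score = leaf * (1.0 / 3.0) * 0.5       # ≈ 0.0689421, the displayed score
```

The same arithmetic applies to every tree below; only the per-term figures change.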
  2. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.06
    0.05515368 = product of:
      0.11030736 = sum of:
        0.11030736 = product of:
          0.33092207 = sum of:
            0.33092207 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.33092207 = score(doc=230,freq=2.0), product of:
                0.44160777 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.052088603 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  3. Understanding metadata (2004) 0.05
    0.053071506 = product of:
      0.10614301 = sum of:
        0.10614301 = sum of:
          0.049684722 = weight(_text_:libraries in 2686) [ClassicSimilarity], result of:
            0.049684722 = score(doc=2686,freq=2.0), product of:
              0.1711139 = queryWeight, product of:
                3.2850544 = idf(docFreq=4499, maxDocs=44218)
                0.052088603 = queryNorm
              0.29036054 = fieldWeight in 2686, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2850544 = idf(docFreq=4499, maxDocs=44218)
                0.0625 = fieldNorm(doc=2686)
          0.056458294 = weight(_text_:22 in 2686) [ClassicSimilarity], result of:
            0.056458294 = score(doc=2686,freq=2.0), product of:
              0.18240541 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052088603 = queryNorm
              0.30952093 = fieldWeight in 2686, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=2686)
      0.5 = coord(1/2)
    
    Abstract
    Metadata (structured information about an object or collection of objects) is increasingly important to libraries, archives, and museums. And although librarians are familiar with a number of issues that apply to creating and using metadata (e.g., authority control and controlled vocabularies), the world of metadata is nonetheless different from library cataloging, with its own set of challenges. Therefore, whether you are new to these concepts or quite experienced with classic cataloging, this short (20-page) introductory paper on metadata can be helpful.
    Date
    10. 9.2004 10:22:40
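Hits that match more than one query term, like this one ("libraries" and "22"), sum the per-term weights before applying coord. The printed idf values are consistent with the ClassicSimilarity formula idf(t) = 1 + ln(maxDocs / (docFreq + 1)), with tf = sqrt(freq). A sketch reproducing this entry's score from docFreq, freq, and fieldNorm alone (queryNorm is taken as printed, since it depends on the full query):

```python
import math

MAX_DOCS = 44218
QUERY_NORM = 0.052088603  # copied from the explain tree

def idf(doc_freq):
    # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

def term_weight(doc_freq, freq, field_norm):
    query_weight = idf(doc_freq) * QUERY_NORM
    field_weight = math.sqrt(freq) * idf(doc_freq) * field_norm
    return query_weight * field_weight

# "Understanding metadata": terms "libraries" (docFreq=4499) and "22"
# (docFreq=3622), both with freq=2.0 and fieldNorm=0.0625.
w_libraries = term_weight(doc_freq=4499, freq=2.0, field_norm=0.0625)
w_22 = term_weight(doc_freq=3622, freq=2.0, field_norm=0.0625)

score = (w_libraries + w_22) * 0.5  # coord(1/2); ≈ 0.053071506
```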
  4. Haslhofer, B.: Uniform SPARQL access to interlinked (digital library) sources (2007) 0.05
    0.053071506 = product of:
      0.10614301 = sum of:
        0.10614301 = sum of:
          0.049684722 = weight(_text_:libraries in 541) [ClassicSimilarity], result of:
            0.049684722 = score(doc=541,freq=2.0), product of:
              0.1711139 = queryWeight, product of:
                3.2850544 = idf(docFreq=4499, maxDocs=44218)
                0.052088603 = queryNorm
              0.29036054 = fieldWeight in 541, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2850544 = idf(docFreq=4499, maxDocs=44218)
                0.0625 = fieldNorm(doc=541)
          0.056458294 = weight(_text_:22 in 541) [ClassicSimilarity], result of:
            0.056458294 = score(doc=541,freq=2.0), product of:
              0.18240541 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052088603 = queryNorm
              0.30952093 = fieldWeight in 541, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=541)
      0.5 = coord(1/2)
    
    Abstract
    In this presentation, we focus on a solution for providing uniform access to Digital Libraries and other online services. To enable uniform query access to heterogeneous sources, we must provide metadata interoperability in a way that allows a query language - in this case SPARQL - to cope with the incompatibility of the metadata in the various sources without changing their existing information models.
    Date
    26.12.2011 13:22:46
  5. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.05
    0.048573304 = product of:
      0.09714661 = sum of:
        0.09714661 = sum of:
          0.037263542 = weight(_text_:libraries in 1967) [ClassicSimilarity], result of:
            0.037263542 = score(doc=1967,freq=2.0), product of:
              0.1711139 = queryWeight, product of:
                3.2850544 = idf(docFreq=4499, maxDocs=44218)
                0.052088603 = queryNorm
              0.2177704 = fieldWeight in 1967, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2850544 = idf(docFreq=4499, maxDocs=44218)
                0.046875 = fieldNorm(doc=1967)
          0.059883066 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
            0.059883066 = score(doc=1967,freq=4.0), product of:
              0.18240541 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052088603 = queryNorm
              0.32829654 = fieldWeight in 1967, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1967)
      0.5 = coord(1/2)
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
    Source
    Beyond libraries - subject metadata in the digital environment and semantic web. IFLA Satellite Post-Conference, 17-18 August 2012, Tallinn
  6. Automatic classification research at OCLC (2002) 0.05
    0.04643757 = product of:
      0.09287514 = sum of:
        0.09287514 = sum of:
          0.043474134 = weight(_text_:libraries in 1563) [ClassicSimilarity], result of:
            0.043474134 = score(doc=1563,freq=2.0), product of:
              0.1711139 = queryWeight, product of:
                3.2850544 = idf(docFreq=4499, maxDocs=44218)
                0.052088603 = queryNorm
              0.25406548 = fieldWeight in 1563, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2850544 = idf(docFreq=4499, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1563)
          0.049401004 = weight(_text_:22 in 1563) [ClassicSimilarity], result of:
            0.049401004 = score(doc=1563,freq=2.0), product of:
              0.18240541 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052088603 = queryNorm
              0.2708308 = fieldWeight in 1563, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1563)
      0.5 = coord(1/2)
    
    Abstract
    OCLC enlists the cooperation of the world's libraries to make the written record of humankind's cultural heritage more accessible through electronic media. Part of this goal can be accomplished through the application of the principles of knowledge organization. We believe that cultural artifacts are effectively lost unless they are indexed, cataloged and classified. Accordingly, OCLC has developed products, sponsored research projects, and encouraged participation in international standards communities whose outcome has been improved library classification schemes, cataloging productivity tools, and new proposals for the creation and maintenance of metadata. Though cataloging and classification require expert intellectual effort, we recognize that at least some of the work must be automated if we hope to keep pace with cultural change
    Date
    5. 5.2003 9:22:09
  7. Lavoie, B.; Connaway, L.S.; Dempsey, L.: Anatomy of aggregate collections : the example of Google print for libraries (2005) 0.04
    0.042857103 = product of:
      0.085714206 = sum of:
        0.085714206 = sum of:
          0.064542346 = weight(_text_:libraries in 1184) [ClassicSimilarity], result of:
            0.064542346 = score(doc=1184,freq=24.0), product of:
              0.1711139 = queryWeight, product of:
                3.2850544 = idf(docFreq=4499, maxDocs=44218)
                0.052088603 = queryNorm
              0.3771894 = fieldWeight in 1184, product of:
                4.8989797 = tf(freq=24.0), with freq of:
                  24.0 = termFreq=24.0
                3.2850544 = idf(docFreq=4499, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1184)
          0.021171859 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
            0.021171859 = score(doc=1184,freq=2.0), product of:
              0.18240541 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052088603 = queryNorm
              0.116070345 = fieldWeight in 1184, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1184)
      0.5 = coord(1/2)
    
    Abstract
    Google's December 2004 announcement of its intention to collaborate with five major research libraries - Harvard University, the University of Michigan, Stanford University, the University of Oxford, and the New York Public Library - to digitize and surface their print book collections in the Google searching universe has, predictably, stirred conflicting opinion, with some viewing the project as a welcome opportunity to enhance the visibility of library collections in new environments, and others wary of Google's prospective role as gateway to these collections. The project has been vigorously debated on discussion lists and blogs, with the participating libraries commonly referred to as "the Google 5". One point most observers seem to concede is that the questions raised by this initiative are both timely and significant. The Google Print Library Project (GPLP) has galvanized a long overdue, multi-faceted discussion about library print book collections. The print book is core to library identity and practice, but in an era of zero-sum budgeting, it is almost inevitable that print book budgets will decline as budgets for serials, digital resources, and other materials expand. As libraries re-allocate resources to accommodate changing patterns of user needs, print book budgets may be adversely impacted. Of course, the degree of impact will depend on a library's perceived mission. A public library may expect books to justify their shelf-space, with de-accession the consequence of minimal use. A national library, on the other hand, has a responsibility to the scholarly and cultural record and may seek to collect comprehensively within particular areas, with the attendant obligation to secure the long-term retention of its print book collections. 
The combination of limited budgets, changing user needs, and differences in library collection strategies underscores the need to think about a collective, or system-wide, print book collection - in particular, how can an inter-institutional system be organized to achieve goals that would be difficult, and/or prohibitively expensive, for any one library to undertake individually [4]? Mass digitization programs like GPLP cast new light on these and other issues surrounding the future of library print book collections, but at this early stage, it is light that illuminates only dimly. It will be some time before GPLP's implications for libraries and library print book collections can be fully appreciated and evaluated. But the strong interest and lively debate generated by this initiative suggest that some preliminary analysis - premature though it may be - would be useful, if only to undertake a rough mapping of the terrain over which GPLP potentially will extend. At the least, some early perspective helps shape interesting questions for the future, when the boundaries of GPLP become settled, workflows for producing and managing the digitized materials become systematized, and usage patterns within the GPLP framework begin to emerge.
    This article offers some perspectives on GPLP in light of what is known about library print book collections in general, and those of the Google 5 in particular, from information in OCLC's WorldCat bibliographic database and holdings file. Questions addressed include: * Coverage: What proportion of the system-wide print book collection will GPLP potentially cover? What is the degree of holdings overlap across the print book collections of the five participating libraries? * Language: What is the distribution of languages associated with the print books held by the GPLP libraries? Which languages are predominant? * Copyright: What proportion of the GPLP libraries' print book holdings are out of copyright? * Works: How many distinct works are represented in the holdings of the GPLP libraries? How does a focus on works impact coverage and holdings overlap? * Convergence: What are the effects on coverage of using a different set of five libraries? What are the effects of adding the holdings of additional libraries to those of the GPLP libraries, and how do these effects vary by library type? These questions certainly do not exhaust the analytical possibilities presented by GPLP. More in-depth analysis might look at Google 5 coverage in particular subject areas; it also would be interesting to see how many books covered by the GPLP have already been digitized in other contexts. However, these questions are left to future studies. The purpose here is to explore a few basic questions raised by GPLP, and in doing so, provide an empirical context for the debate that is sure to continue for some time to come. A secondary objective is to lay some groundwork for a general set of questions that could be used to explore the implications of any mass digitization initiative. A suggested list of questions is provided in the conclusion of the article.
    Date
    26.12.2011 14:08:22
  8. Goldberga, A.: Synergy towards shared standards for ALM : Latvian scenario (2008) 0.04
    0.03980363 = product of:
      0.07960726 = sum of:
        0.07960726 = sum of:
          0.037263542 = weight(_text_:libraries in 2322) [ClassicSimilarity], result of:
            0.037263542 = score(doc=2322,freq=2.0), product of:
              0.1711139 = queryWeight, product of:
                3.2850544 = idf(docFreq=4499, maxDocs=44218)
                0.052088603 = queryNorm
              0.2177704 = fieldWeight in 2322, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2850544 = idf(docFreq=4499, maxDocs=44218)
                0.046875 = fieldNorm(doc=2322)
          0.042343717 = weight(_text_:22 in 2322) [ClassicSimilarity], result of:
            0.042343717 = score(doc=2322,freq=2.0), product of:
              0.18240541 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052088603 = queryNorm
              0.23214069 = fieldWeight in 2322, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2322)
      0.5 = coord(1/2)
    
    Abstract
    The report reflects the Latvian scenario in co-operation for standardization of memory institutions. Differences and problems as well as benefits and possible solutions, tasks and activities of the Standardization Technical Committee for Archives, Libraries and Museums Work (MABSTK) are analysed. A map of standards as a vision for ALM collaboration in standardization and the "Digitizer's Handbook" (translated into English) prepared by the Competence Centre for Digitization of the National Library of Latvia (NLL) are presented. A shortcut to building the National Digital Library Letonica and its digital architecture (with a pilot project about the Latvian composer Jazeps Vitols and the digital collection of ex-president of Latvia Vaira Vike-Freiberga) reflects the practical co-operation between different players.
    Date
    26.12.2011 13:33:22
  9. Franke, F.: ¬Das Framework for Information Literacy : neue Impulse für die Förderung von Informationskompetenz in Deutschland?! (2017) 0.04
    0.03980363 = product of:
      0.07960726 = sum of:
        0.07960726 = sum of:
          0.037263542 = weight(_text_:libraries in 2248) [ClassicSimilarity], result of:
            0.037263542 = score(doc=2248,freq=2.0), product of:
              0.1711139 = queryWeight, product of:
                3.2850544 = idf(docFreq=4499, maxDocs=44218)
                0.052088603 = queryNorm
              0.2177704 = fieldWeight in 2248, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2850544 = idf(docFreq=4499, maxDocs=44218)
                0.046875 = fieldNorm(doc=2248)
          0.042343717 = weight(_text_:22 in 2248) [ClassicSimilarity], result of:
            0.042343717 = score(doc=2248,freq=2.0), product of:
              0.18240541 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052088603 = queryNorm
              0.23214069 = fieldWeight in 2248, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2248)
      0.5 = coord(1/2)
    
    Abstract
    The Framework for Information Literacy for Higher Education was adopted by the board of the Association of College & Research Libraries (ACRL) in January 2016. It builds on the idea of "threshold concepts" and sees information literacy as closely linked to scholarship and research. In teaching information literacy, it therefore places strong emphasis on the "why", not just the "what". The Framework's approach has been widely and controversially discussed. Does it really offer a new perspective on fostering information literacy, or is it largely old wine in new skins? Can the Framework give fresh impetus to the activities of libraries in Germany, or does it describe something we have long been doing? This article attempts to suggest what consequences the Framework may have for our courses and what changed learning objectives may come with it. In doing so, it argues for a comprehensive understanding of information literacy that is not limited to individual aspects such as search skills.
    Source
    o-bib: Das offene Bibliotheksjournal. 4(2017) Nr.4, S.22-29
  10. Voß, J.: Classification of knowledge organization systems with Wikidata (2016) 0.04
    0.03980363 = product of:
      0.07960726 = sum of:
        0.07960726 = sum of:
          0.037263542 = weight(_text_:libraries in 3082) [ClassicSimilarity], result of:
            0.037263542 = score(doc=3082,freq=2.0), product of:
              0.1711139 = queryWeight, product of:
                3.2850544 = idf(docFreq=4499, maxDocs=44218)
                0.052088603 = queryNorm
              0.2177704 = fieldWeight in 3082, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2850544 = idf(docFreq=4499, maxDocs=44218)
                0.046875 = fieldNorm(doc=3082)
          0.042343717 = weight(_text_:22 in 3082) [ClassicSimilarity], result of:
            0.042343717 = score(doc=3082,freq=2.0), product of:
              0.18240541 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052088603 = queryNorm
              0.23214069 = fieldWeight in 3082, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3082)
      0.5 = coord(1/2)
    
    Pages
    S.15-22
    Source
    Proceedings of the 15th European Networked Knowledge Organization Systems Workshop (NKOS 2016) co-located with the 20th International Conference on Theory and Practice of Digital Libraries 2016 (TPDL 2016), Hannover, Germany, September 9, 2016. Ed. by Philipp Mayr et al. [http://ceur-ws.org/Vol-1676/=urn:nbn:de:0074-1676-5]
  11. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.03
    0.03447105 = product of:
      0.0689421 = sum of:
        0.0689421 = product of:
          0.20682628 = sum of:
            0.20682628 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.20682628 = score(doc=4388,freq=2.0), product of:
                0.44160777 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.052088603 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Footnote
    See: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls
  12. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.03
    0.03447105 = product of:
      0.0689421 = sum of:
        0.0689421 = product of:
          0.20682628 = sum of:
            0.20682628 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.20682628 = score(doc=5669,freq=2.0), product of:
                0.44160777 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.052088603 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    "The English-language Wikipedia now has more than 6 million articles. The German-language Wikipedia comes second with 2.3 million articles, followed by the French-language Wikipedia with 2.1 million articles (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> and Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/>). 250120 via digithek ch = #fineBlog see also: In view of last week's publication of the six-millionth article in the English-language Wikipedia, the community newspaper "Wikipedia Signpost" has called for a moratorium on the publication of articles about companies. This is not an accusation against the Wikimedia Foundation, but the current measures to protect the encyclopedia against abusive undeclared paid editing are clearly not working. *"Since the volunteer authors are currently being overwhelmed by advertising in the form of Wikipedia articles, and since the WMF does not appear to be able to counter this, the only viable way forward for the authors would be to prohibit the creation of new articles about companies for the time being"*, writes the user Smallbones in his editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> for today's issue."
  13. Danowski, P.: Authority files and Web 2.0 : Wikipedia and the PND. An Example (2007) 0.03
    0.033169694 = product of:
      0.06633939 = sum of:
        0.06633939 = sum of:
          0.031052953 = weight(_text_:libraries in 1291) [ClassicSimilarity], result of:
            0.031052953 = score(doc=1291,freq=2.0), product of:
              0.1711139 = queryWeight, product of:
                3.2850544 = idf(docFreq=4499, maxDocs=44218)
                0.052088603 = queryNorm
              0.18147534 = fieldWeight in 1291, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2850544 = idf(docFreq=4499, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1291)
          0.035286434 = weight(_text_:22 in 1291) [ClassicSimilarity], result of:
            0.035286434 = score(doc=1291,freq=2.0), product of:
              0.18240541 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052088603 = queryNorm
              0.19345059 = fieldWeight in 1291, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1291)
      0.5 = coord(1/2)
    
    Abstract
    More and more users index everything on their own on the web 2.0. There are services for links, videos, pictures, books, encyclopaedic articles and scientific articles. All these services are independent of libraries. But must that really be so? Can't libraries use their experience and tools to make user indexing better? Drawing on the experience of a project between the German-language Wikipedia and the German person authority file (Personen Namen Datei - PND) located at the German National Library (Deutsche Nationalbibliothek), I would like to show what is possible: how users can and will use the authority files, if we let them. We will take a look at how the project worked and what we can learn from it for future projects. Conclusions: authority files can have a role in the web 2.0; there must be an open interface/service for retrieval; everything that is indexed on the net with authority files can easily be integrated in a federated search; O'Reilly: you have to find ways for your data to become more important the more it is used.
    Content
    Lecture given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  14. Somers, J.: Torching the modern-day library of Alexandria : somewhere at Google there is a database containing 25 million books and nobody is allowed to read them. (2017) 0.03
    0.031680778 = product of:
      0.063361555 = sum of:
        0.063361555 = sum of:
          0.035132404 = weight(_text_:libraries in 3608) [ClassicSimilarity], result of:
            0.035132404 = score(doc=3608,freq=4.0), product of:
              0.1711139 = queryWeight, product of:
                3.2850544 = idf(docFreq=4499, maxDocs=44218)
                0.052088603 = queryNorm
              0.2053159 = fieldWeight in 3608, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.2850544 = idf(docFreq=4499, maxDocs=44218)
                0.03125 = fieldNorm(doc=3608)
          0.028229147 = weight(_text_:22 in 3608) [ClassicSimilarity], result of:
            0.028229147 = score(doc=3608,freq=2.0), product of:
              0.18240541 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052088603 = queryNorm
              0.15476047 = fieldWeight in 3608, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=3608)
      0.5 = coord(1/2)
    
    Abstract
    You were going to get one-click access to the full text of nearly every book that's ever been published. Books still in print you'd have to pay for, but everything else - a collection slated to grow larger than the holdings at the Library of Congress, Harvard, the University of Michigan, or any of the great national libraries of Europe - would have been available for free at terminals that were going to be placed in every local library that wanted one. At the terminal you were going to be able to search tens of millions of books and read every page of any book you found. You'd be able to highlight passages and make annotations and share them; for the first time, you'd be able to pinpoint an idea somewhere inside the vastness of the printed record, and send somebody straight to it with a link. Books would become as instantly available, searchable, and copy-pasteable - as alive in the digital world - as web pages. It was to be the realization of a long-held dream. "The universal library has been talked about for millennia," Richard Ovenden, the head of Oxford's Bodleian Libraries, has said. "It was possible to think in the Renaissance that you might be able to amass the whole of published knowledge in a single room or a single institution." In the spring of 2011, it seemed we'd amassed it in a terminal small enough to fit on a desk. "This is a watershed event and can serve as a catalyst for the reinvention of education, research, and intellectual life," one eager observer wrote at the time. On March 22 of that year, however, the legal agreement that would have unlocked a century's worth of books and peppered the country with access terminals to a universal library was rejected under Rule 23(e)(2) of the Federal Rules of Civil Procedure by the U.S. District Court for the Southern District of New York. When the library at Alexandria burned it was said to be an "international catastrophe." 
When the most significant humanities project of our time was dismantled in court, the scholars, archivists, and librarians who'd had a hand in its undoing breathed a sigh of relief, for they believed, at the time, that they had narrowly averted disaster.
  15. Information als Rohstoff für Innovation : Programm der Bundesregierung 1996-2000 (1996) 0.03
    Date
    22. 2.1997 19:26:34
  16. Ask me[@sk.me]: your global information guide : der Wegweiser durch die Informationswelten (1996) 0.03
    Date
    30.11.1996 13:22:37
  17. Kosmos Weltatlas 2000 : Der Kompass für das 21. Jahrhundert. Inklusive Welt-Routenplaner (1999) 0.03
    Date
    7.11.1999 18:22:39
  18. Mitchell, J.S.: DDC 22 : an introduction (2003) 0.03
    Abstract
    Dewey Decimal Classification and Relative Index, Edition 22 (DDC 22) will be issued simultaneously in print and web versions in July 2003. The new edition is the first full print update to the Dewey Decimal Classification system in seven years-it includes several significant updates and many new numbers and topics. DDC 22 also features some fundamental structural changes that have been introduced with the goals of promoting classifier efficiency and improving the DDC for use in a variety of applications in the web environment. Most importantly, the content of the new edition has been shaped by the needs and recommendations of Dewey users around the world. The worldwide user community has an important role in shaping the future of the DDC.
    Object
    DDC-22
  19. Baker, T.: ¬A grammar of Dublin Core (2000) 0.03
    Abstract
    Dublin Core is often presented as a modern form of catalog card -- a set of elements (and now qualifiers) that describe resources in a complete package. Sometimes it is proposed as an exchange format for sharing records among multiple collections. The founding principle that "every element is optional and repeatable" reinforces the notion that a Dublin Core description is to be taken as a whole. This paper, in contrast, is based on a much different premise: Dublin Core is a language. More precisely, it is a small language for making a particular class of statements about resources. Like natural languages, it has a vocabulary of word-like terms, the two classes of which -- elements and qualifiers -- function within statements like nouns and adjectives; and it has a syntax for arranging elements and qualifiers into statements according to a simple pattern. Whenever tourists order a meal or ask directions in an unfamiliar language, considerate native speakers will spontaneously limit themselves to basic words and simple sentence patterns along the lines of "I am so-and-so" or "This is such-and-such". Linguists call this pidginization. In such situations, a small phrase book or translated menu can be most helpful. By analogy, today's Web has been called an Internet Commons where users and information providers from a wide range of scientific, commercial, and social domains present their information in a variety of incompatible data models and description languages. In this context, Dublin Core presents itself as a metadata pidgin for digital tourists who must find their way in this linguistically diverse landscape. Its vocabulary is small enough to learn quickly, and its basic pattern is easily grasped. It is well-suited to serve as an auxiliary language for digital libraries. This grammar starts by defining terms. It then follows a 200-year-old tradition of English grammar teaching by focusing on the structure of single statements. It concludes by looking at the growing dictionary of Dublin Core vocabulary terms -- its registry, and at how statements can be used to build the metadata equivalent of paragraphs and compositions -- the application profile.
    Date
    26.12.2011 14:01:22
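    The element-plus-qualifier statement pattern that Baker's abstract describes can be sketched as follows. This is a hypothetical illustration only: the element names come from the Dublin Core Metadata Element Set, but the helper function and the sample record are invented here, not taken from the paper.

    ```python
    # A minimal sketch of Dublin Core as a "language" of statements:
    # each statement pairs an element (noun-like) with an optional
    # qualifier (adjective-like) and a value.

    DC_ELEMENTS = {
        "title", "creator", "subject", "description", "publisher",
        "contributor", "date", "type", "format", "identifier",
        "source", "language", "relation", "coverage", "rights",
    }

    def statement(element, value, qualifier=None):
        """Build one Dublin Core statement. Every element is optional
        and repeatable, so a record is simply a list of statements."""
        if element not in DC_ELEMENTS:
            raise ValueError(f"unknown element: {element}")
        return (element, qualifier, value)

    # An invented record describing Baker's own article:
    record = [
        statement("title", "A grammar of Dublin Core"),
        statement("creator", "Baker, T."),
        statement("date", "2000", qualifier="issued"),
        statement("language", "en"),
    ]
    ```

    The point of the sketch is the "simple sentence pattern": a record is nothing more than a repeatable list of such statements, which is what makes the vocabulary small enough to learn quickly.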
  20. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.02
    Date
    22. 7.2006 15:22:28

Languages

  • e 256
  • d 94
  • a 5
  • el 2
  • i 1
  • nl 1

Types

  • a 190
  • s 14
  • i 12
  • m 7
  • r 7
  • b 3
  • p 2
  • x 2
  • n 1