Search (168 results, page 1 of 9)

  • Active filter: type_ss:"el"
  1. Tay, A.: The next generation discovery citation indexes : a review of the landscape in 2020 (2020) 0.11
    0.11420862 = coord(1/2) * (0.17864458 weight(_text_:mass in 40, freq=2.0, idf=6.634292, fieldNorm=0.0546875) + 0.049772646 weight(_text_:22 in 40, freq=2.0, idf=3.5018296, fieldNorm=0.0546875)) [ClassicSimilarity, queryNorm=0.05248046]
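    The score breakdown above is standard Lucene ClassicSimilarity (TF-IDF) explain output: each matching term clause contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm with tf = sqrt(freq), and the summed clause scores are scaled by the coord factor. A minimal recomputation sketch in Python, using only the factors reported above (the helper name clause_score is ours, not Lucene's):

```python
import math

QUERY_NORM = 0.05248046  # global queryNorm reported in the explain output


def clause_score(freq: float, idf: float, field_norm: float) -> float:
    """One term clause under ClassicSimilarity:
    (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)."""
    query_weight = idf * QUERY_NORM
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight


# Top hit (Tay 2020, doc 40): two matching clauses, scaled by coord(1/2).
mass = clause_score(freq=2.0, idf=6.634292, field_norm=0.0546875)
t22 = clause_score(freq=2.0, idf=3.5018296, field_norm=0.0546875)
print(mass, t22, 0.5 * (mass + t22))  # ~0.17864458, ~0.049772646, ~0.11420862
```

    The same three per-clause inputs (freq, idf, fieldNorm) appear in the condensed score lines of every hit below, so each listed score can be rechecked the same way.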
    
    Abstract
    Conclusion There is a reason why Google Scholar and Web of Science/Scopus are kings of the hill in their respective arenas. They have strong brand recognition, a head start in development, and a mass of eyeballs and users that leads to an almost virtuous cycle of improvement. Competing against such well-established competitors is not easy even when one has deep pockets (Microsoft) or a killer idea (scite). It will be interesting to see what the landscape looks like in 2030. Stay tuned for part II, where I review each particular index.
    Date
    17.11.2020 12:22:59
  2. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.07
    0.06946074 = coord(1/2) * coord(1/3) * 0.41676444 weight(_text_:3a in 1826, freq=2.0, idf=8.478011, fieldNorm=0.078125)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  3. Lavoie, B.; Connaway, L.S.; Dempsey, L.: Anatomy of aggregate collections : the example of Google Print for libraries (2005) 0.06
    0.06480306 = coord(1/2) * (0.108274974 weight(_text_:mass in 1184, freq=4.0, idf=6.634292, fieldNorm=0.0234375) + 0.021331133 weight(_text_:22 in 1184, freq=2.0, idf=3.5018296, fieldNorm=0.0234375))
    
    Abstract
    Google's December 2004 announcement of its intention to collaborate with five major research libraries - Harvard University, the University of Michigan, Stanford University, the University of Oxford, and the New York Public Library - to digitize and surface their print book collections in the Google searching universe has, predictably, stirred conflicting opinion, with some viewing the project as a welcome opportunity to enhance the visibility of library collections in new environments, and others wary of Google's prospective role as gateway to these collections. The project has been vigorously debated on discussion lists and blogs, with the participating libraries commonly referred to as "the Google 5". One point most observers seem to concede is that the questions raised by this initiative are both timely and significant. The Google Print Library Project (GPLP) has galvanized a long overdue, multi-faceted discussion about library print book collections. The print book is core to library identity and practice, but in an era of zero-sum budgeting, it is almost inevitable that print book budgets will decline as budgets for serials, digital resources, and other materials expand. As libraries re-allocate resources to accommodate changing patterns of user needs, print book budgets may be adversely impacted. Of course, the degree of impact will depend on a library's perceived mission. A public library may expect books to justify their shelf-space, with de-accession the consequence of minimal use. A national library, on the other hand, has a responsibility to the scholarly and cultural record and may seek to collect comprehensively within particular areas, with the attendant obligation to secure the long-term retention of its print book collections. The combination of limited budgets, changing user needs, and differences in library collection strategies underscores the need to think about a collective, or system-wide, print book collection - in particular, how can an inter-institutional system be organized to achieve goals that would be difficult, and/or prohibitively expensive, for any one library to undertake individually [4]? Mass digitization programs like GPLP cast new light on these and other issues surrounding the future of library print book collections, but at this early stage, it is light that illuminates only dimly. It will be some time before GPLP's implications for libraries and library print book collections can be fully appreciated and evaluated. But the strong interest and lively debate generated by this initiative suggest that some preliminary analysis - premature though it may be - would be useful, if only to undertake a rough mapping of the terrain over which GPLP potentially will extend. At the least, some early perspective helps shape interesting questions for the future, when the boundaries of GPLP become settled, workflows for producing and managing the digitized materials become systematized, and usage patterns within the GPLP framework begin to emerge.
    This article offers some perspectives on GPLP in light of what is known about library print book collections in general, and those of the Google 5 in particular, from information in OCLC's WorldCat bibliographic database and holdings file. Questions addressed include: * Coverage: What proportion of the system-wide print book collection will GPLP potentially cover? What is the degree of holdings overlap across the print book collections of the five participating libraries? * Language: What is the distribution of languages associated with the print books held by the GPLP libraries? Which languages are predominant? * Copyright: What proportion of the GPLP libraries' print book holdings are out of copyright? * Works: How many distinct works are represented in the holdings of the GPLP libraries? How does a focus on works impact coverage and holdings overlap? * Convergence: What are the effects on coverage of using a different set of five libraries? What are the effects of adding the holdings of additional libraries to those of the GPLP libraries, and how do these effects vary by library type? These questions certainly do not exhaust the analytical possibilities presented by GPLP. More in-depth analysis might look at Google 5 coverage in particular subject areas; it also would be interesting to see how many books covered by the GPLP have already been digitized in other contexts. However, these questions are left to future studies. The purpose here is to explore a few basic questions raised by GPLP, and in doing so, provide an empirical context for the debate that is sure to continue for some time to come. A secondary objective is to lay some groundwork for a general set of questions that could be used to explore the implications of any mass digitization initiative. A suggested list of questions is provided in the conclusion of the article.
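    Coverage and overlap of the kind asked about above reduce to set arithmetic over work identifiers. A toy sketch with invented holdings (the library names are the Google 5; the identifiers and counts are made up purely for illustration, not taken from WorldCat):

```python
from collections import Counter

# Invented holdings: a set of work identifiers per Google 5 library.
holdings = {
    "Harvard": {"w1", "w2", "w3"},
    "Michigan": {"w2", "w3", "w4"},
    "Stanford": {"w3", "w5"},
    "Oxford": {"w1", "w5", "w6"},
    "NYPL": {"w2", "w6"},
}

union = set().union(*holdings.values())  # works covered by the five combined
system_wide = {f"w{i}" for i in range(1, 11)}  # pretend the system has 10 works
coverage = len(union) / len(system_wide)

# Overlap: share of covered works held by more than one of the five.
counts = Counter(w for s in holdings.values() for w in s)
overlap = sum(1 for c in counts.values() if c > 1) / len(union)

print(f"coverage={coverage:.0%}, overlap={overlap:.0%}")  # coverage=60%, overlap=83%
```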
    Date
    26.12.2011 14:08:22
  4. Rauber, A.: Digital preservation in data-driven science : on the importance of process capture, preservation and validation (2012) 0.06
    0.05688828 = coord(1/2) * coord(1/3) * 0.34132966 weight(_text_:object's in 469, freq=2.0, idf=9.905128, fieldNorm=0.046875)
    
    Abstract
    Current digital preservation is strongly biased towards data objects: digital files of document-style objects, or encapsulated and largely self-contained objects. To provide authenticity and provenance information, comprehensive metadata models are deployed to document information on an object's context. Yet we claim that simply documenting an object's context may not be sufficient to ensure proper provenance and to fulfill the stated preservation goals. Specifically in e-Science and business settings, capturing, documenting and preserving entire processes may be necessary to meet the preservation goals. We thus present an approach for capturing, documenting and preserving processes, and means to assess their authenticity upon re-execution. We will discuss options as well as limitations and open challenges to achieve sound preservation, specifically within scientific processes.
  5. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.06
    0.05556859 = coord(1/2) * coord(1/3) * 0.33341154 weight(_text_:3a in 230, freq=2.0, idf=8.478011, fieldNorm=0.0625)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  6. Markey, K.: The online library catalog : paradise lost and paradise regained? (2007) 0.04
    0.03867769 = coord(1/2) * coord(1/2) * 0.15471075 weight(_text_:mass in 1172, freq=6.0, idf=6.634292, fieldNorm=0.02734375)
    
    Abstract
    The impetus for this essay is the library community's uncertainty regarding the present and future direction of the library catalog in the era of Google and mass digitization projects. The uncertainty is evident at the highest levels. Deanna Marcum, Associate Librarian for Library Services at the Library of Congress (LC), is struck by undergraduate students who favor digital resources over the online library catalog because such resources are available at anytime and from anywhere (Marcum, 2006). She suggests that "the detailed attention that we have been paying to descriptive cataloging may no longer be justified ... retooled catalogers could give more time to authority control, subject analysis, [and] resource identification and evaluation" (Marcum, 2006, 8). In an abrupt about-face, LC terminated series added entries in cataloging records, one of the few subject-rich fields in such records (Cataloging Policy and Support Office, 2006). Mann (2006b) and Schniderman (2006) cite evidence of LC's prevailing viewpoint in favor of simplifying cataloging at the expense of subject cataloging. LC commissioned Karen Calhoun (2006) to prepare a report on "revitalizing" the online library catalog. Calhoun's directive is clear: divert resources from cataloging mass-produced formats (e.g., books) to cataloging the unique primary sources (e.g., archives, special collections, teaching objects, research by-products). She sums up her rationale for such a directive, "The existing local catalog's market position has eroded to the point where there is real concern for its ability to weather the competition for information seekers' attention" (p. 10). At the University of California Libraries (2005), a task force's recommendations parallel those in the Calhoun report, especially regarding the elimination of subject headings in favor of automatically generated metadata. Contemplating these events prompted me to revisit the glorious past of the online library catalog. For a decade and a half beginning in the early 1980s, the online library catalog was the jewel in the crown when people eagerly queued at its terminals to find information written by the world's experts. I despair at how eagerly people now embrace Google despite the suspect provenance of the information Google retrieves. Long ago, we could have added more value to the online library catalog, but the only thing we changed was the catalog's medium. Our failure to act back then cost the online catalog the crown. Now that the era of mass digitization has begun, we have a second chance at redesigning the online library catalog, getting it right, coaxing back old users, and attracting new ones. Let's revisit the past, reconsidering missed opportunities, reassessing their merits, combining them with new directions, making bold decisions and acting decisively on them.
  7. Shala, E.: Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.03
    0.03473037 = coord(1/2) * coord(1/3) * 0.20838222 weight(_text_:3a in 4388, freq=2.0, idf=8.478011, fieldNorm=0.0390625)
    
    Footnote
    See: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls.
  8. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.03
    0.03473037 = coord(1/2) * coord(1/3) * 0.20838222 weight(_text_:3a in 5669, freq=2.0, idf=8.478011, fieldNorm=0.0390625)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  9. OWL Web Ontology Language Guide (2004) 0.03
    0.03190082 = coord(1/2) * coord(1/2) * 0.12760328 weight(_text_:mass in 4687, freq=2.0, idf=6.634292, fieldNorm=0.0390625)
    
    Abstract
    The World Wide Web as it is currently constituted resembles a poorly mapped geography. Our insight into the documents and capabilities available is based on keyword searches, abetted by clever use of document connectivity and usage patterns. The sheer mass of this data is unmanageable without powerful tool support. In order to map this terrain more precisely, computational agents require machine-readable descriptions of the content and capabilities of Web-accessible resources. These descriptions must be in addition to the human-readable versions of that information. The OWL Web Ontology Language is intended to provide a language that can be used to describe the classes and relations between them that are inherent in Web documents and applications. This document demonstrates the use of the OWL language to - formalize a domain by defining classes and properties of those classes, - define individuals and assert properties about them, and - reason about these classes and individuals to the degree permitted by the formal semantics of the OWL language. The sections are organized to present an incremental definition of a set of classes, properties and individuals, beginning with the fundamentals and proceeding to more complex language components.
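    As a concrete illustration of the first two tasks listed above, here is a minimal sketch using the rdflib Python library (our choice of tooling, not the Guide's; the Guide itself works in RDF/XML, and the third task would additionally need a reasoner such as the owlrl package). The wine vocabulary echoes the Guide's running example; the example.org namespace and all names in it are hypothetical:

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/wine#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# Formalize a domain: two classes and a property relating them.
g.add((EX.Wine, RDF.type, OWL.Class))
g.add((EX.Winery, RDF.type, OWL.Class))
g.add((EX.hasMaker, RDF.type, OWL.ObjectProperty))
g.add((EX.hasMaker, RDFS.domain, EX.Wine))
g.add((EX.hasMaker, RDFS.range, EX.Winery))

# Define individuals and assert properties about them.
g.add((EX.ChiantiClassico, RDF.type, EX.Wine))
g.add((EX.BondiWinery, RDF.type, EX.Winery))
g.add((EX.ChiantiClassico, EX.hasMaker, EX.BondiWinery))

print(g.serialize(format="turtle"))
```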
  10. Standage, T.: Information overload is nothing new (2018) 0.03
    0.03190082 = coord(1/2) * coord(1/2) * 0.12760328 weight(_text_:mass in 4473, freq=2.0, idf=6.634292, fieldNorm=0.0390625)
    
    Content
    "Overflowing inboxes, endlessly topped up by incoming emails. Constant alerts, notifications and text messages on your smartphone and computer. Infinitely scrolling streams of social-media posts. Access to all the music ever recorded, whenever you want it. And a deluge of high-quality television, with new series released every day on Netflix, Amazon Prime and elsewhere. The bounty of the internet is a marvellous thing, but the ever-expanding array of material can leave you feeling overwhelmed, constantly interrupted, unable to concentrate or worried that you are missing out or falling behind. No wonder some people are quitting social media, observing "digital sabbaths" when they unplug from the internet for a day, or buying old-fashioned mobile phones in an effort to avoid being swamped. This phenomenon may seem quintessentially modern, but it dates back centuries, as Ann Blair of Harvard University observes in "Too Much to Know", a history of information overload. Half a millennium ago, the printing press was to blame. "Is there anywhere on Earth exempt from these swarms of new books?" moaned Erasmus in 1525. New titles were appearing in such abundance, thousands every year. How could anyone figure out which ones were worth reading? Overwhelmed scholars across Europe worried that good ideas were being lost amid the deluge. Francisco Sanchez, a Spanish philosopher, complained in 1581 that 10m years was not long enough to read all the books in existence. The German polymath Gottfried Wilhelm Leibniz grumbled in 1680 of "that horrible mass of books which keeps on growing"."
  11. Information als Rohstoff für Innovation : Programm der Bundesregierung 1996-2000 (1996) 0.03
    0.028441511 = coord(1/2) * coord(1/2) * 0.113766044 weight(_text_:22 in 5449, freq=2.0, idf=3.5018296, fieldNorm=0.125)
    
    Date
    22. 2.1997 19:26:34
  12. Ask me[@sk.me]: your global information guide : der Wegweiser durch die Informationswelten (1996) 0.03
    0.028441511 = coord(1/2) * coord(1/2) * 0.113766044 weight(_text_:22 in 5837, freq=2.0, idf=3.5018296, fieldNorm=0.125)
    
    Date
    30.11.1996 13:22:37
  13. Kosmos Weltatlas 2000 : Der Kompass für das 21. Jahrhundert. Inklusive Welt-Routenplaner (1999) 0.03
    0.028441511 = coord(1/2) * coord(1/2) * 0.113766044 weight(_text_:22 in 4085, freq=2.0, idf=3.5018296, fieldNorm=0.125)
    
    Date
    7.11.1999 18:22:39
  14. Mitchell, J.S.: DDC 22 : an introduction (2003) 0.03
    0.027823756 = coord(1/2) * coord(1/2) * 0.11129502 weight(_text_:22 in 1936, freq=10.0, idf=3.5018296, fieldNorm=0.0546875)
    
    Abstract
    Dewey Decimal Classification and Relative Index, Edition 22 (DDC 22) will be issued simultaneously in print and web versions in July 2003. The new edition is the first full print update to the Dewey Decimal Classification system in seven years; it includes several significant updates and many new numbers and topics. DDC 22 also features some fundamental structural changes, introduced with the goals of promoting classifier efficiency and improving the DDC for use in a variety of applications in the web environment. Most importantly, the content of the new edition has been shaped by the needs and recommendations of Dewey users around the world. The worldwide user community has an important role in shaping the future of the DDC.
    Object
    DDC-22
  15. Search Engines and Beyond : Developing efficient knowledge management systems, April 19-20 1999, Boston, Mass (1999) 0.03
    0.025520656 = coord(1/2) * coord(1/2) * 0.102082625 weight(_text_:mass in 2596, freq=2.0, idf=6.634292, fieldNorm=0.03125)
    
  16. Dunsire, G.; Willer, M.: Initiatives to make standard library metadata models and structures available to the Semantic Web (2010) 0.03
    0.025520656 = coord(1/2) * coord(1/2) * 0.102082625 weight(_text_:mass in 3965, freq=2.0, idf=6.634292, fieldNorm=0.03125)
    
    Abstract
    The paper discusses the importance of these initiatives in releasing as linked data the very large quantities of rich, professionally-generated metadata stored in formats based on these standards, such as UNIMARC and MARC21, addressing such issues as critical mass for semantic and statistical inferencing, integration with user- and machine-generated metadata, and authenticity, veracity and trust. The paper also discusses related initiatives to release controlled vocabularies, including the Dewey Decimal Classification (DDC), ISBD, Library of Congress Name Authority File (LCNAF), Library of Congress Subject Headings (LCSH), Rameau (French subject headings), Universal Decimal Classification (UDC), and the Virtual International Authority File (VIAF) as linked data. Finally, the paper discusses the potential collective impact of these initiatives on metadata workflows and management systems.
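    For a sense of what "released as linked data" means in practice for a controlled vocabulary such as LCSH, a minimal sketch (again rdflib, our assumed tooling; the concept identifiers below are invented placeholders, whereas the real LCSH linked-data service is published at id.loc.gov):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

# Placeholder namespace; real LCSH concept URIs live under id.loc.gov.
LCSH = Namespace("http://example.org/lcsh/")

g = Graph()
g.bind("skos", SKOS)

concept = LCSH["sh0000001"]  # invented identifier
g.add((concept, RDF.type, SKOS.Concept))
g.add((concept, SKOS.prefLabel, Literal("Libraries", lang="en")))
g.add((concept, SKOS.broader, LCSH["sh0000002"]))  # invented broader concept

print(g.serialize(format="turtle"))
```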
  17. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.03
    0.025138982 = coord(1/2) * coord(1/2) * 0.10055593 weight(_text_:22 in 3925, freq=4.0, idf=3.5018296, fieldNorm=0.078125)
    
    Date
    22. 7.2006 15:22:28
  18. Wolchover, N.: Wie ein Aufsehen erregender Beweis kaum Beachtung fand (2017) 0.03
    0.025138982 = coord(1/2) * coord(1/2) * 0.10055593 weight(_text_:22 in 3582, freq=4.0, idf=3.5018296, fieldNorm=0.078125)
    
    Date
    22. 4.2017 10:42:05
    22. 4.2017 10:48:38
  19. Hafner, R.; Schelling, B.: Automatisierung der Sacherschließung mit Semantic Web Technologie (2015) 0.02
    0.024886323 = coord(1/2) * coord(1/2) * 0.09954529 weight(_text_:22 in 8365, freq=2.0, idf=3.5018296, fieldNorm=0.109375)
    
    Date
    22. 6.2015 16:08:38
  20. Vögel unserer Heimat (1999) 0.02
    0.024886323 = coord(1/2) * coord(1/2) * 0.09954529 weight(_text_:22 in 4084, freq=2.0, idf=3.5018296, fieldNorm=0.109375)
    
    Date
    7.11.1999 18:22:54

Languages

  • d 86
  • e 76
  • el 2
  • a 1
  • nl 1

Types

  • a 73
  • i 10
  • m 5
  • s 3
  • b 2
  • n 2
  • r 2
  • x 1